Opportunity Knocks #119 - On AI-driven automation, who gets to decide?
Every week I share reflections, ideas, questions, and content suggestions focused on championing, building, and accelerating opportunity for children.
A few weeks ago, the New York Times ran an article with the blunt headline, "This A.I. Company Wants to Take Your Job." The story detailed the mission of a start-up openly committed to fully automating human labor. Automation is already improving work in real ways (e.g., safer working conditions), but a mission this audacious prompts immediate practical questions: Which jobs and industries are at risk? How soon could full automation happen? What economic, cultural and social, and psychological impacts would follow? The conversation arrives at a moment when not all automation feels inevitable or beneficial. Despite genuine promise, AI today, particularly in the context of kids, still has too many troubling limitations.
Beneath the current state of the technology and the pressing practicalities of automation lies a foundational question: Who gets to decide? Who should hold the power to make decisions that could shape civilization's trajectory? Should private entities and ambitious entrepreneurs be empowered to dictate how our kids will work? Or should these decisions rest with citizens, articulated collectively through democratically elected representatives or some participatory process?
It’s easy to pick a corner on automation, to cheer it or to fear it. I’m pro-innovation: market-driven advancements have improved billions of lives. I’m also pro-entrepreneur: innovators and company builders rarely set out to cause harm. But good intentions alone don’t prevent unintended societal consequences, and full automation, particularly automation driven purely by economic incentives, carries unknown unknowns that deserve deeper scrutiny.
Periods of rapid technological change have been driven by the private sector, often because government reacts slowly, embraces new ideas cautiously, and struggles to implement change at scale (obviously, exceptions like DARPA exist). Consider how industry has previously driven major social and cultural shifts:
The automotive sector shaped urban planning and environmental policies.
The oil and gas industry shaped geopolitical dynamics and climate trajectories.
The social media giants shaped communication, influencing public discourse, privacy norms, and democratic processes, for good and, well, for ill.
These industry-driven inflection points happened with varying levels of public oversight. And while these changes brought meaningful progress, they also created unforeseen consequences that we’re still grappling with.
The current wave of AI-driven innovation may present a fundamentally different set of challenges and opportunities due to the scale, speed, and reach of AI in every potential facet of work (and our lives). We don’t know yet whether fully automating labor, if it happens, will lead us toward collective flourishing, worse inequality, a positive but complex mixture of both, or some dystopian version of the winner-take-all scenario.
There is interesting work happening to address these open questions. In Taiwan, for example, Audrey Tang, who served as the first Minister of Digital Affairs, established “Alignment Assemblies” that involved citizens in the governance of AI. Additionally, several academic papers propose AI-governance models.
And, yes, the federal government is involved (see: the U.S. Senate’s 99-1 vote to strip a proposed moratorium on state-level AI regulation), but actual progress means more than surface-level oversight. To fully address the potential downsides of full automation, I think we will need a combination of:
Mandated public deliberation via assemblies or councils that consider the potential economic, cultural and social, and psychological impacts (in addition to the Taiwan model, other analogs include France’s Citizens’ Convention for Climate and Ireland’s Citizens’ Assemblies, which have taken on issues including drug use and biodiversity loss).
Mechanisms embedded in laws or regulations that ensure decisions made by automated systems or AI can be questioned, understood, and challenged by the people directly affected by them. Industry is likely to fight any movement toward explainability and contestability tooth and nail.
Stronger separation of industry influence from government by dramatically reforming campaign finance and lobbying to limit regulatory capture (e.g., via publicly financed elections, limits on corporate contributions, lobbying reform, revolving-door restrictions, and improved disclosure laws).
None of these recommended actions will be easy to implement. They would face attacks as expensive, anti-technology, and ideologically driven (with complaints, depending on where one sits on the spectrum, from insiders and outsiders, right and left). I don’t see them that way. I see a budding movement to ensure the technology we build today aligns with society’s longer-term values and well-being.
Until next week, be calm and be kind,
Andrew