Autonomous AI and the End of Work as a Social Contract

[Figure: Diagrammatic illustration of power concentration and social fragmentation caused by autonomous AI systems.]

From Tools to Actors

For decades, technology replaced effort while preserving structure. Machines accelerated production, software streamlined coordination, and digital systems optimized decision-making. Work remained the organizing principle of modern societies. Employment anchored income, identity, legitimacy, and political participation.

Autonomous and agentic AI break this continuity. These systems do not merely assist human workers; they act. They execute sequences, pursue objectives, adapt to changing conditions, and increasingly operate without meaningful human intervention. The shift is not incremental. It is categorical.

When systems act, the human role changes from execution to oversight. When systems coordinate, the human role shifts from decision-making to exception handling. When systems learn faster than organizations can adapt, human involvement becomes optional rather than necessary.

This is not a debate about whether AI can do a job better than a human. It is about whether the job remains economically relevant at all.

The Asymmetry Shock

Autonomous AI does not spread evenly. It concentrates where compute, data, capital, and institutional capacity already exist. Large organizations adopt agentic systems first because they can absorb failure, legal exposure, and integration costs. Smaller entities either adapt late or disappear.

This produces an asymmetry shock. Productivity gains accrue to a narrow segment of firms and regions, while job losses diffuse broadly across sectors that lack bargaining power or political insulation. The result is not technological unemployment in the abstract, but structural redundancy.

Historical analogies fail here. The transition from agriculture to industry unfolded over generations and absorbed labor into expanding manufacturing ecosystems. The transition driven by autonomous AI compresses disruption into years while reducing the need for replacement labor.

Efficiency gains no longer require proportional human participation. That is the defining break.

Job Destruction Is Not a Transitional Problem

Much of the public discourse treats job loss as a temporary side effect of innovation. Workers are displaced, retrained, and redeployed. This assumption rests on the idea that new categories of work emerge at scale.

Agentic AI undermines this logic. It automates not only execution but coordination, analysis, and supervision. Entire workflow layers collapse simultaneously. The space where new jobs might emerge is narrower than before, and often requires high levels of abstraction, system literacy, or institutional access.

“This wave of automation is different because it replaces cognitive labor. The assumption that new jobs will automatically appear rests on historical patterns that no longer apply.”

Martin Ford

Reskilling is not a scalable solution when the bottleneck is not skill mismatch but structural redundancy. Teaching millions of people to perform tasks that no longer need human performers does not create employment. It creates friction.

In practice, job destruction outpaces job creation for extended periods. Some regions absorb this through welfare systems, others through informal economies, and many through social decay. None resolve it through retraining alone.

“A society that eliminates the need for labor without redefining political participation does not become free. It becomes unstable.”

Hannah Arendt

The Collapse of Supervision as a Universal Model

Human-in-the-loop oversight is frequently presented as the ethical and practical solution to autonomous systems. In reality, it is a privilege.

Effective supervision requires time, expertise, institutional authority, and legal clarity. It assumes stable organizations, manageable system complexity, and incentives aligned with safety rather than speed. These conditions do not exist everywhere.

As agentic systems scale, supervision degrades. Oversight becomes symbolic, procedural, or retrospective. In many deployments, it disappears entirely. Systems operate because stopping them would be more costly than tolerating their errors.

The idea that every autonomous system will remain meaningfully supervised is a comforting fiction. In most contexts, autonomy will be the default, not the exception.

Power, Knowledge, and the New Divide

AI does not democratize intelligence. It amplifies existing asymmetries in knowledge and control. Those who design, deploy, and govern autonomous systems gain leverage over those who interact with them passively or depend on their outcomes.

“When advanced systems drive productivity, control over those systems matters far more than participation in the broader economy.”

Nick Bostrom

This creates a renewed divide between those who understand systems and those who are subject to them. Not a divide of education alone, but of agency. Knowing how a system works is not the same as being able to influence it.

As economic relevance concentrates, political influence follows. Institutions adapt to protect productive systems rather than displaced populations. Legitimacy shifts from representation to performance.

In this environment, knowledge becomes both a shield and a weapon. Access to it determines not just opportunity, but safety.

Neo-Luddism and Machine Hatred

When systems displace people faster than societies can absorb them, resistance emerges. Not always ideological, often visceral. Infrastructure becomes a target because it is tangible, visible, and symbolic.

Attacks on machines, sabotage of automated systems, and hostility toward algorithmic decision-making are not historical anomalies. They are predictable responses to perceived dispossession.

Labeling such reactions as irrational misses the point. They are expressions of lost agency. When participation in the system no longer yields security or dignity, destruction becomes a form of communication.

Whether societies interpret this as criminality, political resistance, or system failure will shape their response. Suppression, accommodation, or redesign are all possible. None are painless.

Divergent Futures

Not all regions will experience this transition equally. Some states will stabilize through centralized coordination, surveillance, and controlled redistribution. Others will fragment under the strain of unemployment, weakened institutions, and social mistrust.

Freedom becomes conditional. In some places, it is traded for stability. In others, it erodes without compensation. The idea of a shared global trajectory dissolves.

Technology accelerates divergence. Autonomous systems reward coherence and punish disorder. Regions unable to maintain institutional continuity fall further behind, not because they lack talent, but because they lack structure.

Europe and the Illusion of Delay

Europe often frames regulation as a buffer against technological disruption. In reality, it functions more as a delay mechanism. Rules slow deployment, shape incentives, and redistribute some risks, but they do not reverse structural trends.

Barcelona and similar hubs may cushion the impact through public-sector adoption, creative industries, and regulated experimentation. This buys time, not immunity.

The underlying dynamics remain unchanged: autonomous systems reduce the need for human labor, concentrate power, and strain the social contract that sustained industrial democracies.

Delay can be valuable. It allows adaptation. It also fosters denial.

If autonomous AI renders large segments of human labor economically irrelevant, what remains as the basis of social legitimacy?

If only a minority can meaningfully design, supervise, or control these systems, is democracy compatible with that reality, or does it become performative?

If violence against machines and infrastructure increases, will societies treat it as crime, resistance, or a signal of systemic failure?

If some regions stabilize through authoritarian coordination while others fragment, does freedom remain a universal value or a local luxury?

If meaning, safety, and prosperity concentrate among those who control knowledge systems, what obligations—if any—do they have toward those who do not?

And if no such obligations can be enforced at scale, what kind of civilization is being built?

Perhaps the question is no longer what artificial intelligence will do with us, but what we will do when it no longer needs us.
