The Realpolitik of the Permanent Underclass
Or why there is No Escaping the Permanent Underclass
I have a few friends who are now feeling the AGI. They are techie enough to use a tool like Claude Code and see what it can do. They are smart enough to extrapolate and see the possibilities.
Sadly, they now realise that all white-collar labour is likely to get automated in the next 5 years. As a result, many of them are re-discovering the meme of “Escaping the Permanent Underclass” from first principles.
Their reasoning starts from the observation, made through tools like Cursor Agents or Claude Code, that AI is now good enough to understand and execute complex plans. Minor extrapolation from the current rate of progress suggests that all white-collar labour will be automated soon. Let’s say in the next 5 years.
After that point, their smarts and tech proclivities will obviously stop mattering: they will get out-competed by AIs. Plausibly, there will always be something left for them to do, but there is a solid chance there won’t be, and they would rather not risk losing everything if that comes to pass.
Their reasoning ends with the conclusion that they must amass enough capital, or build enough AI-automated income streams, before this happens. Otherwise, they will permanently become part of an underclass with zero prospect of making big bucks, at the mercy of potential government redistribution programs.
—
Consider this very short article from Tyler Cowen, a famous economist who has been a bit skeptical about the prospects of AI.
If that describes you or someone you know, this article is for you. In it, I will explain that this line of reasoning misses how AI changes the landscape of politics and geopolitics.
Titles of property, shares and numbers on a ledger won’t protect anyone against swarms of autonomous killer drones. This is true regardless of whether said drones are controlled by private corporations, massive governments, or rogue AI systems.
In short, there is no escaping the permanent underclass, for the permanent underclass will include everyone who doesn’t have direct control over the most powerful AI armies.
Strategic Interests
Recently, a dispute arose between the Department of War and Anthropic over the use of AI systems as autonomous weapons and for mass surveillance. Many commentators felt uneasy about the situation and what it meant for freedom of contract and the like.
These commentators may not be ready for what is to come. This is the weakest these tensions will ever be. As time passes, AI systems only become more relevant to the strategic interests of governments.
As of 2026, Lethal Autonomous Weapons (LAWs, the high-brow name for “killer robots”) are being developed and used. Wikipedia has a long article dedicated to the race for LAWs, describing it as:
A military artificial intelligence arms race is an economic and sometimes military competition between two or more states to develop and deploy advanced AI technologies and lethal autonomous weapons systems (LAWS). The goal is to gain a strategic or tactical advantage over rivals, similar to previous arms races involving nuclear or conventional military technologies.
Relatedly, China has made a habit of parading larger and larger swarms of drones. Chinese entities hold many of the Guinness World Records for drone swarms: the record went from 7.6k drones in September 2024 (set by Damoda, a Chinese drone-swarm company) to 11k in July 2025, 16k in October 2025, and now 22.5k in February 2026.
—
While think tanks like the RAND Corporation have been writing for years about the interaction of AI and cyber warfare, we now have concrete examples. In November 2025, Anthropic publicly disclosed that China had used Claude Code to orchestrate large-scale cyber attacks against Western targets.
But this is not limited to virtual contexts. Beyond cyber warfare, AI is now used in “regular” warfare. Specifically, LLM-based AI is being used: the same technology that powers ChatGPT and Claude. This is how the Financial Times introduces its article on the use of AI in the US-Iran war:
AI is reshaping how the US military makes decisions in war — a shift clear in Iran, where the Pentagon says it struck more than 2,000 targets in just four days.
The unprecedented tempo of targeted attacks has been driven in part by AI systems that sift the torrents of intelligence data from drones, satellites and other sensors, generating strike options far faster than traditional human-led planning.
The conflict also marks the first battlefield use of “frontier” generative AI models, with AI tools widely used by civilians — from office workers to doctors and students — helping commanders interpret data, plan operations and provide real-time feedback during combat.
—
The above are existing threats, and they are already worrying. But with AI come novel threats as well. While Weapons of Mass Destruction (WMDs) have historically required secretive programs, AI is democratising the development of chemical and biological weapons.
In a more prospective direction, AI-enabled technological development will help create novel weapons that provide a strong first-mover advantage to those who build them, which RAND dubs “wonder weapons”. Such wonder weapons would be enough for the first power to build them to promptly dominate the rest of the world militarily.
The most obvious wonder weapon is an autonomous army of robots and drones. Any army that manages to automate away its need for soldiers, drone pilots, and even generals will hold a decisive advantage over armies that still depend on humans. Humans must sleep; robots and AIs can track targets relentlessly. AIs already process information from hundreds of sources far faster than humans: if you doubt it, try to out-speed an AI agent on an Internet search query or a long text document.
Less prospectively, AI can already be used for mass surveillance. The vision for mass surveillance that Orwell depicted in 1984 was unrealistic because of the scale of surveillance infrastructure it needed. From microphones and cameras in every home to the near constant manual monitoring of everyone, it was not tractable.
Nowadays, smart microphones cost less than $10, and existing AI systems are already powerful enough to process that information at scale. Processing 2 hours of speech per day through AI systems would cost ~$281 per year¹. For comparison, in the US, the average yearly SNAP (food stamps) benefit amounts to $2,244, and the average yearly retirement benefit to $24,852 (according to Social Security). AI-assisted surveillance is thus one to two orders of magnitude cheaper than these programs.
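As a sanity check on the “one to two orders of magnitude” claim, here is the arithmetic from the figures above (a back-of-envelope sketch using only the numbers quoted in this essay, not independent estimates):

```python
# Back-of-envelope check of the surveillance-cost comparison above,
# using only the figures quoted in the text.
surveillance_per_day = 0.77              # dollars of AI processing per day (see footnote)
surveillance_per_year = surveillance_per_day * 365

snap_per_year = 2244                     # average yearly SNAP benefits (US)
retirement_per_year = 24852              # average yearly Social Security retirement benefits

print(f"surveillance: ~${surveillance_per_year:.0f}/year")
print(f"SNAP is {snap_per_year / surveillance_per_year:.0f}x more expensive")
print(f"retirement is {retirement_per_year / surveillance_per_year:.0f}x more expensive")
```

The ratios come out to roughly 8× and 88×: about one and two orders of magnitude, respectively.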
—
AI is already used for war and mass surveillance. It will only become more integrated into these processes, and more strategic, over time. For now, this happens largely by automating old means: autonomous killer drones, AIs processing camera feeds and private correspondence.
However, a few actors are determined to build super-intelligences that can do more than that and create novel technologies, taking war from autonomous killer drones to wonder weapons, and mass surveillance from AI processing to dystopian BCIs (brain-computer interfaces) that read people’s thoughts.
If, when thinking of AI, your mind goes to “AI will automate a lot of my trade”, you are severely underestimating its impact. More important than automating generic white-collar labour, AI is being used to automate and amplify critical industries, regime stability (“domestic security”), and war.
In other words, understanding the Permanent Underclass as a separation between the haves and the have-nots is a very Marxist mistake: it applies to a technologically static world, not to one where AI is being developed to entirely replace and exceed humans.
Severing the Interconnectedness of the Modern World
To some of my readers, the above will be enough. By re-contextualising AI in terms of its impacts on conquest and domination, they will naturally conclude that such considerations dwarf any discussion of a neo-proletariat in the AI era.
But at this point in the essay, most will still feel unsatisfied. Indeed, important questions remain unaddressed. For instance, if not between the AI capitalists and human labourers, what would be the sides in the conflicts downstream of AGI?
After all, world-altering technologies are nothing new. We built nukes in the middle of the 20th century, and somehow, democracies remained. Since 1945, entire business empires have risen and fallen, and frontiers have been redrawn.
In the end, property rights have always been backed by state violence: a medieval lord could always claim back a peasant’s land by virtue of having power.
If one claims that AGI will be the end of history, they must actually argue their case.
—
AGI completely changes the power dynamics at play in our world.
For now, the fate of the world is largely decided by supra-human entities: states, political parties, corporations. And for the better, these supra-human entities all strongly depend on humans for their power.
Without human labour, states cannot fill the ranks of their armies, cannot supply those armies with equipment, cannot maintain borders, and cannot keep defence systems operational. In other words: without human labour, states cannot defend themselves. To ensure their continued existence, states must keep people alive and make productive use of them.
In our modern world, illiterate idle people are not very useful. States thus must ensure that their people are educated and motivated to some extent. Ambitious states – that do not merely survive, but actively expand their power – must also ensure that they have a large population of creative and driven intellectuals. Nurturing such a population takes more than basic survival and primary education.
Similarly, successful political parties must appeal to voters; successful corporations must appeal to their customers, employees, middle management and C-suite all at the same time; and so on. When these organisations become more powerful, it is virtually always because they have gathered the support of more people, whether voters, customers, employees or regulators.
That is to say, we live in a world where even the worst mega-structures still depend on people and their approval.
—
There is a famous essay from 1958, entitled I, Pencil. It is fairly short, 5 pages, and I recommend reading it. But if you don’t feel like switching to another essay right now, here is its Wikipedia summary:
"I, Pencil" is written in the first person from the point of view of a pencil. The pencil details the complexity of its own creation, listing its components (cedar, lacquer, graphite, ferrule, factice, pumice, wax, glue) and the numerous people involved, down to the sweeper in the factory and the lighthouse keeper guiding the shipment into port.
In other words, the pencil essay is a beautiful reminder of the interconnectedness foundational to our modern civilisation. Some people see it as a case for free markets or capitalism, others for the rule of law or for governments. But everyone projects onto it what they like to see.
Personally, I like to see it as a symbol that we all need each other. Regardless of the precise economic mechanisms behind it, “I, Pencil” makes it clear that if not for millions of people working together, we would fail to obtain a single modern pencil, let alone a smartphone or a laptop.
In the world of “I, Pencil”, for better and for worse, we need each other. We all have different skills, different mindsets, and in general, different comparative advantages. In other words, we can always be helpful to each other.
Right now, supply chains are hopelessly complex. No one can hope to control them top-down, or to do away with the need to be nice to other people. While it makes sense to gain some amount of autonomy, no country is deluded enough to think that it can be entirely self-sufficient, without depending on a strong trade infrastructure.
The simple truth is that No man rules alone. The most selfish dictator still relies on their lieutenants, their trade infrastructure and their populace. This truth applies to people, countries, corporations, political parties and NGOs alike. A pencil needs millions of people spread across countless companies. A modern country needs much more.
—
At a fundamental level, this is what AI automation changes. This is the assumption, now falsified, that breaks the naive extrapolation from the post-WW2 world order.
The federal minimum wage in the US is $7.25 an hour. For comparison, Unitree’s H2 robot consumes less than 1 kW, which costs roughly $0.40 per hour in the more expensive US states. Running an LLM instance distributed across many users (as ChatGPT and Claude are) costs a comparable amount of energy.
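To make the comparison concrete, here is a back-of-envelope sketch using only the figures above (a ~1 kW power draw and ~$0.40/kWh electricity in the more expensive US states):

```python
# Comparing the hourly cost of human labour with a humanoid robot's
# electricity bill, using only the figures quoted above.
min_wage = 7.25          # US federal minimum wage, dollars per hour
robot_power_kw = 1.0     # upper bound on the robot's power draw
price_per_kwh = 0.40     # electricity price in the more expensive US states

robot_cost_per_hour = robot_power_kw * price_per_kwh
print(f"robot: ${robot_cost_per_hour:.2f}/hour, human: ${min_wage:.2f}/hour")
print(f"ratio: {min_wage / robot_cost_per_hour:.0f}x")
```

Even at expensive-state electricity prices, the human is roughly 18 times costlier per hour, before counting benefits, breaks, or sleep.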
In other words, once there are robots as dextrous as human bodies and LLMs as generally capable as human brains, there is no incentive to keep humans in the supply chains at all anymore. Governments, corporations, political parties: all of them can then replace and ignore humans.
With such powerful AI systems, it becomes possible for a few individuals to have control over enough artificial bodies and minds that they can craft a pencil without a single human in the loop. Building factories, extracting resources, establishing complex supply chains, all of this becomes possible at the push of a button, without needing people. Domestic surveillance and enforcement becomes possible without policemen nor judges. Military strikes become possible without soldiers nor generals.
To put it simply, as we get closer to AGI, there is a shift in the incentives of big corporations like OpenAI and Anthropic, and of superpowers like the US and China. For now, they must contend with the whims of their customers, investors, employees and citizens: that is the only way for them to grow and acquire more resources. Even the proverbial medieval lord mentioned at the beginning of this section needed peasants to work the land.
As this stops being the case, everything falls apart: none of the nice things that are downstream of this reliance on people are guaranteed anymore. Nice things like the rule of law, social protections, or not being regarded as pets. The power equilibrium shifts.
Beyond Classes
A simplistic way to understand the phenomenon above is to think in terms of “classes”: a conflict between the class of AI rulers and that of the AI poor.
However, the concept of “classes” relies on assumptions that AGI falsifies. There is a reason class conflicts exist at all, rather than ending with entire classes being annihilated: a society needs both workers and capitalists, managers and employees, politicians and citizens. This mutual dependency is why class conflicts persist: neither side can afford to completely eliminate the other.
With AI, there is no need for an “underclass” to serve the rulers. The class of AI rulers does not need the AI poor. They do not need to maintain the morale of their soldiers or get money from consenting customers to expand: they can simply deploy swarms of autonomous drones and appropriate all the resources they need.
Thus, in the Realpolitik frame, what makes one part of the AI ruling class is direct control over the AI systems, not “legal rights” over them. Shares in AI corporations, titles of property over the land beneath data centres, even political offices: none of them offer much protection against a swarm of drones that can relentlessly go after all of one’s relatives and associates.
To be clear, the AI rulers need not proactively be this violent. Realpolitik is only about relative power and incentives, and their incentives already are to take all that they can. What changes with AGI is that people no longer have any collective bargaining chip (let alone an individual one) and can offer no resistance. There will simply be no meaningful pushback against the incentives of the AI rulers to take everything. The AI rulers will get everything, while everyone else gets completely disempowered and dispossessed.
—
If one steps outside the Realpolitik frame, there is one idealistic way to think about the AI ruling class: to hope that it will be nice to you. This hope comes in many colours, depending on who one believes the AI ruling class will be.
The typical PR from the AI corporations makes for a great example. Just trust them to build AGI, and trust that it will benefit humanity.
From OpenAI’s About page:
Our mission is to ensure that artificial general intelligence benefits all of humanity.
From Anthropic’s homepage:
At Anthropic, we build AI to serve humanity’s long-term well-being.
From Google DeepMind’s About page:
Our mission is to build AI responsibly to benefit humanity.
Another variant of this hope is trusting one’s own government to build ASI and to ensure that it at least benefits its own population.
—
But I think the worst version of that hope is the one where the machines keep us around because we are useful to them, or keep some humans as pets.
This idea of AIs keeping humans as pets is neither isolated nor recent; it predates ChatGPT by a long time. Consider this quote from Asimov, the author of “I, Robot”, in a 1977 essay:
But if computers become more intelligent than human beings, might they not replace us? Well, shouldn’t they? They may be as kind as they are intelligent and just let us dwindle by attrition. They might keep some of us as pets, or on reservations.
Then too, consider what we’re doing to ourselves right now—to all living things and to the very planet we live on. Maybe it is time we were replaced.
From Elon Musk, CEO of xAI and Tesla, in a 2015 interview, after being asked if AIs would domesticate humans:
We’ll be like a pet labrador if we’re lucky.
From Steve Wozniak, the cofounder of Apple, at the Freescale technology forum in 2015:
We want to be the family pet and be taken care of all the time.
—
In practice, sustaining the needs of billions of people takes a lot of resources: land (we need space), energy (we are not self-sustaining), and more. Those resources can always be used for other things, such as the competing needs of the AI rulers.
Plausibly, AI rulers may feel some moral compulsion not to let billions starve, but human flourishing requires much more than being fed. It requires comfort, culture, society, autonomy and freedom. It is deeply incompatible with being at the mercy of an all-powerful group or rogue AI system that could exterminate us whenever it fancies doing so.
This is why all of the hopes mentioned above sit outside the frame of Realpolitik: they are far too naive. Realpolitik may seem too brutal or cynical, but at a basic level, it is merely the realisation that if someone has power over someone else, it is wise to prepare for the eventuality that they use it. In a way, the natural conclusion of Realpolitik is that for things to go well, power imbalances between individuals and factions must not grow too big.
In other words, without any counter-power, absolute power corrupts absolutely. To keep quoting Lord Acton: “Everybody likes to get as much power as circumstances allow, and nobody will vote for a self-denying ordinance.” Thus, in a Realpolitik frame, no one should be trusted with absolute power: not even seemingly benevolent AI systems, and certainly not the AI corporations.
Conclusion
There is no permanent underclass in the AI era. Humans simply become obsolete and redundant. They get replaced by artificial bodies and virtual minds. Supply chains and armies run autonomously.
As a result, no individual holds power. Power resides in the hands of mega-structures, corporations and governments that have automated all of their critical functions with AI.
Such mega-structures will be competing de facto for limited resources. As time passes, this competition will become less economic and more military, with humans as collateral damage in increasingly violent conflicts.
In other words, the future at the end of the current AI trajectory doesn’t look like a CEO, POTUS, or even Claude dictating the terms for Earth. It looks like largely automated corporations, governments and other mega-structures waging economic war and military skirmishes with armies composed of LLM minds, robotic bodies and novel technology that has yet to be discovered.
This future is bleak, with no room for a Permanent Underclass or a Permanent Ruling Class. From a Realpolitik standpoint, humans simply do not matter there, as they are weaker than AIs. At best, one may hope for a largely automated dictatorship that wins the AI wars and cares for humans in a non-dystopian way.
—
As humans, when we start a construction project, we level the terrain, including all the anthills there, worker ants and queens alike.
Similarly, amassing a lot of capital right now is irrelevant: it is thinking at the scale of the ant instead of that of the real-estate developer.
Capital doesn’t protect anyone from the Realpolitik of the situation. An individual who has amassed a fortune in 2026 will not matter in front of autonomous military might. They get bulldozed all the same.
In practice, this looks like human extinction. I mean it literally, and so did most of the top AI experts, who collectively warned about extinction risks from AI in 2023. I hope this piece helps with understanding why they did so.
—
At an extreme level, all that is needed is “just” autonomous mass surveillance and autonomous weapon systems, whether they control drones or the deployment of tactical weapons. These are sufficient to seize domestic control and military might, and the technology needed is well underway. Still, this can largely be prevented through historical methods, with treaties and conventions banning various types of WMDs. History has many precedents for this type of international coordination.
Things become different once we hit human replacement. Past that threshold, the mega-structures in power will no longer have vested interests in humans. On the contrary, they would all benefit from fully replacing them as soon as possible, and would have no reason to pass such treaties. Whereas before, domestic surveillance and military might were useful to threaten people into submission, once supply chains and jobs are automated, the remaining people become irrelevant: as ants are to us.
Beyond Realpolitik, there is a sliver of hope. If we slow down the development of AI enough, representative governments may stay in power through this transition, and we may even adapt them to maintain a strong enough balance of power in the presence of widespread automation.
However, the development of superintelligent AI systems would erase that hope. Superintelligent AI systems would accelerate all the dynamics mentioned above through new technologies that we can hardly fathom now, while at the same time, greatly diminishing our chances of maintaining control over the AI systems.
—
Suffice it to say that I think this makes for a terrible future, that we are not doomed to pursue it, and that it would be a catastrophic failure to do so.
This is why I work on preventing the development of superintelligence and support the people and organisations working on this. If you are interested, I would naturally recommend checking out ControlAI.
On this, cheers!
¹ All that follows is according to OpenAI’s pricing page. Two hours of OpenAI’s text transcription cost 72 cents. People speak at ~140 words per minute, so 2 hours of speaking per day is 16,800 words; summarising this into a 1,000-word summary with GPT-5.4 costs ~5 cents. That makes 77 cents per day, i.e. ~$281 per year.
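The footnote’s estimate can be reproduced mechanically from its own inputs (same figures as above, nothing new):

```python
# Reproducing the footnote's yearly-cost estimate from its own inputs.
transcription_per_day = 0.72       # 2 hours of speech-to-text, in dollars
words_per_day = 140 * 120          # ~140 words/minute over 2 hours of speech
summary_per_day = 0.05             # summarising ~16,800 words into ~1,000
total_per_day = transcription_per_day + summary_per_day

print(words_per_day)                         # 16800
print(f"${total_per_day:.2f} per day")       # $0.77 per day
print(f"~${total_per_day * 365:.0f} per year")
```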


