What Holds When You're Worthless?

I’m writing this with a tool that will make me unemployable.

Not metaphorically. Dario Amodei — CEO of Anthropic, the company that built this tool — said at Davos that AI would do “most, maybe all” of what software developers do within a year. “Nobel-level” research within two. You can discount that as a CEO selling his product. The benchmarks are harder to dismiss.

I’m 25. I’m building a startup. I use Claude every day. I can feel the ground shifting under my feet in real time, and I’ve been sitting with a question I can’t resolve: if AI makes human labour economically worthless, what structural guarantees survive that? Not philosophically. Mechanically. What actually holds?

The resource curse, but the oil is infinite

There’s a well-studied phenomenon in economics called the resource curse. Countries that discover vast natural resources tend to get worse, not better. Michael Ross showed that oil-rich countries are 50% more likely to be autocratic and twice as likely to descend into civil war. The mechanism is simple: when a government’s income comes from pumping oil instead of taxing citizens, it stops needing citizens. No taxation, no representation. Saudi Arabia, Venezuela, Congo — rich in resources, poor in democracy.

The analogy isn’t exact — AI revenue flows to companies, not governments. But the structural dynamic is the same: when economic value can be generated without mass human participation, the incentive to serve mass human interests weakens at every level.

The argument I keep returning to — from the gradual disempowerment thesis and The Intelligence Curse — is that artificial intelligence is a resource curse. But worse. Because oil doesn’t replace the people who extract it. AI does.

As The Intelligence Curse puts it: “AGI looks a lot more like coal or oil than the plow, steam engine, or computer.” Every general-purpose technology in history, from steam to electricity to computing, destroyed some kinds of work and created jobs nobody predicted. Maybe AI will too; I may be wrong. But there’s no “other sector” to pivot into when the sector being automated is thinking itself. The burden of proof should sit with those who say it won’t matter, because the downside of being wrong is civilisational, and the biases that prevent action mean we won’t get a second chance to respond.

The US horse population peaked at 25 million in 1920 and collapsed to 3 million by 1960. Nobody negotiated lower wages for horses. The entire economic category was abolished. The horse problem had no political constituency. The human problem does — for now. And “worthless” may understate where this trajectory leads. When a worker’s full cost — healthcare, housing, compliance, emotional maintenance — exceeds zero even at no salary, they’re not just uncompetitive. They’re a liability. The trajectory doesn’t stop at zero.

That’s the intelligence curse. And unlike oil, you can’t diversify away from it.

The invisible mechanism

There’s a distinction that most AI discourse misses entirely — one that I think explains why the popular responses all fail and why the window is closing faster than anyone perceives.

There’s explicit alignment — the AI follows your instructions — and implicit alignment — societal systems structurally depend on human participation, so they serve human interests whether anyone intends them to or not.

A factory owner doesn’t pay fair wages because he’s moral. He pays them because the factory doesn’t run without workers. A government doesn’t grant rights because it believes in them. It grants them because it needs taxes and soldiers. The alignment between institutional power and human welfare isn’t designed — it’s a structural byproduct of dependence. Remove the dependence and the alignment evaporates, regardless of anyone’s stated values. That’s implicit alignment. It’s the gravity that holds the whole system together, and we’ve never had to think about it because it’s never been absent.

“The significance of implicit alignment can be hard to recognize because we have never seen its absence.”

We’ve been spending billions on explicit alignment — making the AI follow instructions. Almost nobody is working on implicit alignment — preserving the structural reasons institutions have to serve human interests at all. And implicit alignment is the one that actually kept humans politically relevant for centuries.

This is why the popular answers fail. They all attack the explicit problem while leaving the implicit one untouched.

I’ve spent weeks stress-testing every solution people are proposing. Two answers keep coming up. Both break on contact with the structural problem.

”UBI will fix it”

Maybe. But who funds universal basic income when citizens produce nothing of economic value? Today’s redistribution works because governments tax productive activity. In a world where AI generates most economic output, UBI becomes a gift from whoever controls the AI — not a right, but a patronage system. The real axis isn’t redistribution versus markets. It’s patronage versus structure. Do you give people enough to survive within a system they don’t control, or do you rebuild the system so the means of cognitive production are distributed by design?

Governments already redistribute about half of GDP, and AGI could grow the pie so massively that even a modest slice is transformative. During the transition, governments still need citizens as consumers and voters, which preserves some leverage. But that leverage erodes as AI substitution increases. And even where UBI has been tried — Finland, Stockton, GiveDirectly — the evidence shows unconditional cash doesn’t destroy purpose, but it doesn’t create it either. Money clears the floor. What people build on it depends on community, identity, and narrative. UBI solves the income problem. It doesn’t touch the meaning problem or the power problem.

“Personal AI clones keep you relevant”

I wanted this one to work. The idea: fine-tune an AI on your knowledge, your style, your expertise, and deploy it as your economic agent. Several startups are building this right now.

It breaks in three ways. First, a fine-tuned model of you cannot compete with the frontier general model long-term. Second, you’re building your clone on someone else’s infrastructure — you can’t entrepreneurialise your way out of a system designed to capture the returns before you see them. If your AI clone runs on OpenAI’s API, routes customers through Google, and processes payments through Apple — you’re not an entrepreneur. You’re a tenant farmer on digital land you’ll never own. Third, it’s a market solution to a structural problem. Making individual humans slightly more competitive doesn’t change the market’s trajectory. It might also accelerate the very displacement it claims to prevent.

The closing window

Here’s where this gets urgent.

There’s a historical pattern that I think most people haven’t connected to AI. Modern states emerged as unintended byproducts of warfare. Kings needed mass armies, which required mass taxation, which required consent — a bargaining loop that produced representation as a byproduct. Democracy required a specific class structure created by specific economic conditions: “No bourgeois, no democracy.” Military technology shapes viable social structures. Mass warfare created mass enfranchisement.

Liberal democracy didn’t emerge because humans are inherently wise or because rights are self-evident. It emerged because empowering citizens was economically and militarily fit. Mass production needed mass workers. Mass warfare needed mass soldiers. The bargain was: we’ll give you rights because we need you.

AI breaks both sides of this bargain simultaneously. This is what implicit alignment collapsing looks like — not a dramatic betrayal, but the quiet evaporation of the structural reasons anyone had to include you in the first place.

States won’t need your taxes when AI generates the revenue. They won’t need your body when drones do the fighting — Ukraine now says more than 80% of enemy targets are destroyed by drones, with AI guidance raising kill rates dramatically. Military planners call it “trading blood for steel.” When states need neither your money nor your body, the historical basis for your rights evaporates.

The timeline is now. SWE-bench Verified — the benchmark for AI solving real software engineering tasks — hit 80.9% in February 2026, up from single digits when it launched in mid-2024. METR shows autonomous AI task capability doubling every 7 months over 2019–2025 — but every 4 months since 2024. The doubling time itself is shrinking. Patrick Collison, watching actual purchasing data across millions of Stripe businesses, said: “There’s at least a reasonable chance that 2026 Q1 will be looked back upon as the first quarter of the singularity.”
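The compounding in those doubling times is easy to underestimate. A rough sketch of the arithmetic, assuming pure exponential growth (the function and the two-year horizon are illustrative, not taken from METR’s data):

```python
def growth_factor(months: float, doubling_months: float) -> float:
    """Capability multiplier after `months` of growth that doubles every `doubling_months`."""
    return 2 ** (months / doubling_months)

# Two years under each doubling regime quoted above:
print(round(growth_factor(24, 7), 1))  # 7-month doubling -> ~10.8x
print(growth_factor(24, 4))            # 4-month doubling -> 64.0x
```

The shrinking doubling time matters more than either snapshot: the same two years yields roughly six times more growth under the faster regime.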

Meanwhile, Goldman Sachs reports “no discernible impact on major labor market indicators.”

That gap — between the benchmarks and the economy — is itself the signal. And the gap may be more damning than it looks. GDP was designed to count transactions, not welfare. When AI makes legal advice free, GDP registers the legal sector collapsing — not the benefit to billions. The instruments are structurally blind to abundance. The dashboard most confidently says nothing is happening precisely when the economy is most aggressively transforming. So “no discernible impact” may not mean the storm hasn’t started. It may mean the barometer is broken.

The storm hasn’t hit yet. The barometric pressure has been dropping for three years. And when formal governance can’t keep pace with this kind of change, governance doesn’t vanish — it shifts from democratic to private. Corporate terms of service become de facto law. Infrastructure choices become de facto constitutional decisions. The entity that builds the default captures the constitutional moment — not through conspiracy, but through physics. Power flows to the fastest actor in a vacuum.

The lesson from the labour movement is simple and brutal: justice prevailed because workers had leverage and the institutional infrastructure to exercise it (unions, legal frameworks, decades of collective organising), and used it before it was too late. The eight-hour day, the weekend, the abolition of child labour — none of these were gifts. They were extracted by workers whose leverage was simple: capital cannot produce value without labour. A factory without workers is an expensive building. The Writers Guild won in 2023 because studios still needed writers — and even then, with maximum leverage and public sympathy, the best-organised creative workforce in the world won a contract cycle, not a constitutional amendment. That’s the difference between tactical and structural. And that leverage may not exist in five years.

During the Industrial Revolution, real wages stagnated for roughly 50 years despite GDP growth — the Engels Pause. The gains took half a century to reach workers. We may not have half a century this time. And when the gains did arrive, it was because workers organised and demanded them while they still had something the system needed.

You don’t negotiate the constitution after the new government is running. Norway set the rules for its wealth fund early, before the real scale of revenue hit. The window is now, and it’s closing.

What structural guarantees could even work?

I’ve been looking for technical mechanisms that enforce economic participation rights at the infrastructure layer — guarantees that survive even if political institutions fail. The honest answer: they don’t exist yet.

The unglamorous truth is that the tools that have actually protected people — antitrust enforcement, regulatory agencies, disclosure requirements, social insurance — are incremental and administrative. They don’t feel like constitutions. They feel like bureaucracy. And they work. The question is whether they can be built fast enough for a technology moving faster than any regulatory apparatus ever has.

The technical primitives exist in isolation — proof-of-personhood systems like World Network, compliance-locked AI chips, zero-knowledge proofs, trusted execution environments — but nobody has composed them into a system that guarantees economic rights, and each breaks at the trust boundary. The closest thing to engineering implicit alignment directly is civic technology — Taiwan’s vTaiwan platform uses machine-learning-assisted deliberation to produce actual legislation, and quadratic voting mechanisms structurally resist plutocratic capture. These are real, deployed, and they work. But they work within democratic systems that still function. They deepen participation — they don’t guarantee it when the structural basis for participation erodes.
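The plutocracy-resistance of quadratic voting comes from one cost rule: casting n votes on an issue costs n² credits, so influence scales only with the square root of budget. A minimal sketch of that rule (the function names are mine, not from vTaiwan or any deployed system):

```python
import math

def vote_cost(votes: int) -> int:
    """Quadratic voting: n votes on one issue cost n^2 credits."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """The most votes a credit budget can buy on a single issue."""
    return math.isqrt(budget)

# A voter with 100x the credits gets only 10x the votes:
print(max_votes(100))     # -> 10
print(max_votes(10_000))  # -> 100
```

That square-root ceiling is the structural property the passage leans on: money still buys influence, but at quadratically diminishing returns.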

The most promising alternative isn’t personal AI as economic competitor but as fiduciary — an AI with a legally enforceable duty of loyalty to you, like an attorney. Not competing in the market on your behalf, but closing the information asymmetry between you and the institutions that shape your life. A billion AI fiduciaries that surface patterns of institutional abuse could create the preconditions for collective action even when economic leverage is gone. The basis of political relevance shifts from “you need my labour” to “you cannot deceive me without exposure.” But the fatal objection holds: your loyal agent runs on someone else’s base model, whose values are set by someone else. Loyalty at the application layer on top of captured weights at the foundation layer is a contradiction nobody has resolved.

Sam Altman’s “Universal Basic Compute” — everyone gets a slice of GPT-7 — is the closest thing to a protocol-level claim on AI output. But the enforcement mechanism is one company’s promise. That’s patronage, not structure. The best achievable guarantee might be weaker than anyone wants: not preventing violations, but making defection publicly visible and provable. And even that depends on someone caring enough to act — which brings us back to the leverage problem. Which brings us back to the window.

Maybe it’s impossible

This is the section I didn’t want to write.

There’s a strong argument that structural guarantees against human economic irrelevance are simply not possible. Not because the technology fails, but because we fail. We’ve already demonstrated this.

In 1930, Keynes predicted that his grandchildren would work 15-hour weeks. He was right about the productivity gains — we’re roughly 6x more productive than in 1930. He was wrong about the outcome. We converted the abundance into larger houses, bullshit jobs, and anxiety. There’s a political dimension: a productive population with abundant free time is a threat to power. Work disciplines people. Idle people organise.

Keynes identified the hard problem as psychological, not technological — then waved it away, assuming people would figure out meaning once the material problem was solved. They didn’t; the inability to find meaning outside work only deepened.

We already failed this test once. The AI moment is the same test at higher stakes and faster speed.

The Marienthal study haunts me. Sociologists studying an Austrian factory town after the plant closed in the 1930s found that prolonged joblessness didn’t radicalise people. It crushed them. They became passive. They stopped reading, despite having a free library. They stopped keeping time. The defining feature wasn’t anger — it was apathy.

But here’s what’s easy to miss: Marienthal was unemployment without social infrastructure — no community programmes, no retraining, no institutional support. The collapse wasn’t inevitable; it was the specific outcome of displacement with nothing underneath. The question isn’t whether humans can find meaning outside production — the research on flow states, craft, caregiving, and civic life suggests they can. The question is speed. Psychological adaptation to a new identity anchor takes decades. The disruption timeline is years. That mismatch is the real catastrophe risk.

Case and Deaton’s deaths of despair research finds the same pattern in deindustrialised America: it’s not a money crisis. It’s a meaning crisis. The deaths are downstream of the collapse of a way of life — and the speed at which it collapsed.

The optimistic case — that AI grows the pie so fast everyone’s better off in absolute terms — might be right. But absolute improvement with zero structural power is what the Gulf states already have. That’s not freedom. That’s a gilded cage.

I’d update my view if I saw democratic institutions actively encoding AI economic rights — not studying the issue, encoding them. Or a technical mechanism for guaranteed compute access that doesn’t depend on corporate goodwill. I haven’t seen either.

But hoping the powerful are benevolent isn’t a plan. It’s prayer dressed as strategy.

History tells a hard story about guarantees. Every durable guarantee requires three things: enforcement independence, an economic foundation, and cultural entrenchment. AI removes the second. The question is whether the other two can do work they’ve never historically done alone.

They haven’t. The 13th Amendment abolished slavery on paper; the substance was hollowed out for a century. Bretton Woods collapsed; the IMF persisted as a bureaucracy, but the structural guarantee broke. The UDHR works as a rhetorical weapon, not an enforcement mechanism — dissidents cite it, and it doesn’t prevent atrocities. Weimar’s Article 48 — an emergency clause designed to protect the republic — was the mechanism that destroyed it.

Maybe the question itself is wrong. “What holds when you’re worthless” accepts a definition of worth — economic productivity — that wouldn’t survive five minutes of moral philosophy. A child isn’t worthless. A retiree isn’t worthless. The entire capabilities tradition argues that human entitlements are prior to economic contribution, not derived from it. I believe that. But I also notice that every society which has successfully protected those entitlements did so while its economy still needed people. The moral argument has never yet held without the structural argument underneath it.

So maybe the answer was never economic in the first place.

Hannah Arendt drew a distinction between labour (biological survival), work (building durable things), and action (appearing before others, initiating something new). Worth, in her framework, is a question of whether you’re a who or a what. The Ubuntu philosophy says it more simply: a person is a person through other persons. Worth grounded in mutual recognition, not production.

But the honest addendum is that most of us, in industrialised societies, have been given almost no practice in finding meaning there. We’ve spent a century building our identities on what we produce. Asking people to suddenly anchor meaning in relationship when work disappears — that’s not a plan either.

And I should acknowledge the frame I’m working in: this is a Western anxiety about losing something much of the world never fully had. For the Global South, the question is sharper and more immediate. The BPO sector alone employs roughly 50 million people across India, the Philippines, and sub-Saharan Africa — and it’s being automated now, not in five years. The between-country inequality convergence of the last three decades, the single best economic story in the world, could reverse. Five hundred million people were counting on a development pathway that may close before they can use it. That story deserves its own essay, not a paragraph in mine — but pretending this is primarily a question for knowledge workers in rich countries would be dishonest.

Why you probably won’t do anything about this

There’s a reason I think this essay is probably useless.

Everything I’ve argued depends on people perceiving a slow-moving structural threat and responding collectively before it’s too late. The entire history of cognitive science says they won’t. Even after reading this essay, you almost certainly believe — at a level deeper than reason — that you personally will be fine.

The labour movement worked because factory conditions made exploitation visceral. You could see it, smell it, feel it in your body every day. The threat was legible to System 1 — the fast, automatic cognition that actually drives behaviour. AI displacement won’t create those conditions. It will be experienced as a series of individual career disappointments, not as a shared catastrophe. A developer whose output is gradually augmented, then supplemented, then replaced never experiences a discrete moment of loss that would trigger collective action.

And the measurement instruments reinforce the blindness. Employment statistics were designed for an economy of discrete jobs, not continuous augmentation; GDP, as argued above, registers free abundance as sectoral collapse. The dashboard says normal. The capabilities say unprecedented. The gap between those readings is where the future is being decided: invisibly, unmeasured, and therefore ungoverned.

The window closes not with a dramatic betrayal but with a slow failure of perception. And that may be the most honest thing I can say about it.

What I’m actually doing

I don’t have a resolution.

I use Claude for hours every day. Right now, it’s extraordinary. I’m doing ten to a hundred times what I could do alone. If you could freeze this moment, AI is the most powerful amplifier individual humans have ever had. But you can’t freeze the moment. The trajectory is toward autonomous systems deployed by companies with capital — not amplifying individual humans, but replacing them.

There’s a term I keep coming back to: the anthropic bottleneck. The constraint on productive output is shifting from computational capacity to human cognitive speed — the bottleneck is becoming anthropic, human-shaped. Not because the AI is smarter in any meaningful sense, but because it’s faster, tireless, always available, and it scales in ways a person never will. I feel this every day. I’m the slowest part of my own workflow. And the company tightening that bottleneck — building the tool that makes the human the limiting factor — is called Anthropic.

I’m 25. I have maybe a few years where the skills I’m building are worth something in a competitive market. I know this. I’m building anyway — partly because the alternative is the Marienthal response (stop reading, stop keeping time, let the apathy in), and partly because I think the closing window argument means the things people build right now matter disproportionately. I’m writing this from a pub in London. I want my kids to grow up in the world I grew up in — or better. That’s all this is.

So if you build things — software, companies, institutions, policy — here’s what I think matters:

Demand structural guarantees while you still have leverage. Support policy that encodes economic participation rights — compute dividends, AI revenue taxation, mandatory public benefit clauses for foundation model providers. Lock it in while citizens can still withhold something the system needs.

Build things that distribute power, not concentrate it. If you’re a founder, ask yourself one question: does my product increase or decrease the concentration of AI economic returns? Put the answer in your charter now, while good intentions are free and incorporation is cheap.

Stay legible to each other. The Marienthal response is the default. Resist it. The value of political awareness increases exactly as economic leverage decreases — because awareness is the only thing that substitutes for leverage once the window closes.

I know what I’m asking. I’m asking you to perceive a slow-moving structural threat — the kind the human brain is worst at seeing, the kind the economic instruments can’t measure, the kind that will be experienced as individual disappointment rather than shared catastrophe. Every technological revolution produces a reckoning eventually. The question is never whether the crisis comes — it’s whether there’s an institutional imagination ready when it arrives, or whether the response is authoritarian by default. This essay is an attempt to make the invisible visible before that moment. Silence is the one response I know is wrong.

The labs are racing to build the new government. Nobody is writing the constitution.

#ai #economics #governance #disempowerment