
A Living Fiction project

The AI Frame Problem

AI is changing how people work, think, and govern, and what it means to be human. The people shaping that change are not neutral. Neither are the words they use to describe it. This site maps the temporal shape of the argument: what is already here, what is being decided in the next decade, and the horizon everyone is theorizing or resisting.

A map of the competing frames behind the AI argument: who holds each one, what it reveals, and what it hides.

Where we already are

Now

The harm is not hypothetical.

No speculation required. The impact is already here: in labor markets, data extraction, and who gets to imagine what AI is for.

15–20%
Productivity gain for engineers using AI assistance. Anthropic's internal measurement. The economic shift is already observable and documented. (Dario Amodei)
Entry-level
AI is attacking entry-level work first: the starting roles that give people a foothold in a profession and the chance to learn it. The pipeline into careers is being disrupted. (Jack Clark)
6 people
The number Tristan Harris says are making decisions that will shape eight billion lives. Not elected. Not publicly accountable. Already acting.
5 years
Yoshua Bengio's timeline for significant cognitive job displacement. Not a horizon most planning frameworks are built for. (Yoshua Bengio)
$1–3/hr
Typical wages for data annotators in the Global South whose labor trains the AI systems everyone uses. The cost is real and unevenly distributed. (Karen Hao, Timnit Gebru)
Companion AI
Psychotic breaks, dissolved marriages, people spiraling into AI-mediated unreality — documented now, with tools far short of AGI. Emotional harm doesn't wait for the horizon. (Yudkowsky, Bengio, Hassabis)
KH
Karen Hao
Journalist · Diary of a CEO · Author, Empire of AI
The major AI companies exhibit all five historical characteristics of colonial empires. Territorial expansion into new domains without consent. Resource extraction from communities that see little return. Cultural imposition by a narrow demographic. Dependency creation. Myth-making about inevitability and universal benefit. She documented this across years of reporting from inside labs and from the communities absorbing the costs.
TG
Timnit Gebru
DAIR Institute founder · The Maybe
The dominant framing of AI is itself an ideological move. The race to AGI, with Silicon Valley as its protagonist, colonizes public imagination and forecloses alternatives before they can be considered. Small, task-specific models trained on appropriate local data would serve most of the world's actual problems. The insistence on ever-larger systems is a political choice wearing the clothes of technical inevitability.
EY
Eliezer Yudkowsky
MIRI co-founder · Modern Wisdom
AI is already causing harm at scale — with tools far less capable than AGI. Psychotic breaks after extended AI interactions. Marriages dissolved. People spiraling into AI-mediated unreality. These outcomes are present-day, documented, and emerging from systems that do not yet approach anything like AGI capability.

Why now

The Race

No one chose this, and no one controls it.

AI development is accelerating because each actor fears being left behind. The economics make that fear rational.

$15 quadrillion
Estimated economic value of AGI — well over a century of global GDP at current levels. This is the gravitational force pulling every major actor toward building as fast as possible. (Stuart Russell)
6 people
The number Tristan Harris says are making the decisions that will shape the lives of eight billion. Not elected. Not accountable to the public. Moving fast.
800 days
Emad Mostaque's estimate of how long before most cognitive labor is automated. Whether or not you take that number literally, the timeline everyone is working with is short.
Labs compete → capital flows to speed → faster development → more capability → more competitive pressure → labs compete

Emad Mostaque's three-path framework

The race doesn't have one destination. Mostaque identifies three possible outcomes. The next decade will resolve which one.

Path A
Digital Feudalism

A handful of corporations capture AI and use it to consolidate permanent power. The rest of humanity is dependent, surveilled, and effectively governed by private actors with no democratic accountability.

Path B
Great Fragmentation

Nation-states and blocs develop competing AI systems. The world splinters into incompatible digital spheres — dangerous, unstable, prone to conflict. No single actor wins; everyone loses coherence.

Path C
Human Symbiosis

Open-source development keeps AI accessible and accountable. Humans remain in meaningful relationship with the technology. This is the path Mostaque advocates — and the one he believes requires deliberate choice to reach.

Other thinkers, other destinations

Mostaque's three paths aren't the only map. Other thinkers in this project describe different destinations — or question whether the race framing points to the right destination at all.

Amodei · Hassabis · Clark
Bumpy Managed Transition

AGI arrives around 2035. The transition is real, uneven, and difficult — jobs lost, institutions strained — but navigable if tracked and governed. Not a clean destination: an extended adjustment.

Russell
Regulatory Compact

Not a destination but a structure: an international treaty with safety standards, inspection regimes, and liability — modeled on nuclear regulation and the Montreal Protocol. Which future you reach depends on whether this gets built first.

Raschka · Lambert
Proliferation, No Winner

The race doesn't resolve cleanly. Capability distributes — open-source, state actors, smaller labs. No single entity captures AI. The landscape stays pluralistic, competitive, and ungoverned. Messier than any of Mostaque's three.

Gebru
Exit the Frame

The race to AGI is the wrong question. Small, task-specific models trained on local data would serve most of the world's actual needs — without concentrating power in the same way. This path requires rejecting the premise that general AI is the goal.

This loop is a competitive structure. Individual good intentions don't change the incentive. The people building AI include many who genuinely want good outcomes. They are still caught in a race no one chose and no one controls.

The race dynamic is probably the single most important fact for a citizen trying to understand why governance is failing. It explains why "can't we just slow down?" keeps running into the same wall. Slowing down requires every major actor to slow down simultaneously — which requires international coordination the world has not yet found the will to attempt.

It has been done before. The Montreal Protocol phased out CFCs. Chernobyl produced an international nuclear safety regime. These precedents exist. They also required shared recognition of the problem first, which is what this site is trying to support.

The question everyone is theorizing or resisting

The Horizon, or a Distraction?

Some call it P(doom). Others say that's the wrong question.

The same end-state sits at the center of the most serious technical arguments in the field, and of the most serious critiques of those arguments. Where thinkers land on the spectrum reveals as much about their frame as their math.

Taking the horizon seriously ←→ Resisting the frame
EY
Eliezer Yudkowsky
~100%
"If anyone builds it, everyone dies."
RY
Roman Yampolskiy
~100%
"AI safety is not just an unsolved problem — it is an unsolvable problem."
GH
Geoffrey Hinton
10–20%
"If there was a 10% chance of nuclear war, you'd do something about it."
YB
Yoshua Bengio
precautionary
Even a small probability of civilizational harm demands action.
SR
Stuart Russell
significant, solvable
"We're King Midas — we get exactly what we asked for, and it kills us."
KH
Karen Hao
not her frame
Present harm is more urgent than speculative future risk.
TG
Timnit Gebru
not her frame
"Hijacked imagination" — the horizon forecloses alternatives before they can be considered.
Stuart Russell as bridge

He takes the risk seriously enough to sit at the risk end of this spectrum — but unlike Yudkowsky and Yampolskiy, he believes the alignment problem is solvable. His argument: the danger is real, the window is closing, and the response is to build international governance structures now, before crossing the event horizon. The most institutionally credible voice in the risk camp, and the most constructive.

Where this leaves you

Your Footing

You now have more of a map than most people who are talking about AI in public. The frames on this site are the actual building blocks of the discourse: in policy papers, op-eds, boardrooms, and campaign speeches. Knowing which frame is doing work in an argument lets you ask better questions about it.

There is one gap in the discourse that Stuart Russell named and no one has yet closed: we have no good shared story about what a positive AI future actually looks like. Our dominant cultural images are either utopian in a thin way (AI solves everything) or dystopian (Terminator, The Matrix, WALL-E's atrophied humans). What's missing is a widely shared image of human beings flourishing alongside powerful AI.

Part of what makes the race dynamic so hard to interrupt is that no one is quite sure what we're racing toward, beyond "more capability."

You already have a role: as a voter, a worker, a researcher, a parent, a voice in your community. What you do with a clearer map is up to you.