Where we already are
Now
The harm is not hypothetical.
No speculation required. The impact is already here: in labor markets, in data extraction, and in who gets to imagine what AI is for.
Why now
The Race
No one chose this, and no one controls it.
AI development is accelerating because each actor fears being left behind. The economics make that fear rational.
Emad Mostaque's three-path framework
The race doesn't have one destination. Mostaque identifies three possible outcomes. The next decade will resolve which one.
A handful of corporations capture AI and use it to consolidate permanent power. The rest of humanity is dependent, surveilled, and effectively governed by private actors with no democratic accountability.
Nation-states and blocs develop competing AI systems. The world splinters into incompatible digital spheres — dangerous, unstable, prone to conflict. No single actor wins; everyone loses coherence.
Open-source development keeps AI accessible and accountable. Humans remain in meaningful relationship with the technology. This is the path Mostaque advocates — and the one he believes requires deliberate choice to reach.
Other thinkers, other destinations
Mostaque's three paths aren't the only map. Other thinkers in this project describe different destinations — or question whether the race framing points to the right destination at all.
AGI arrives around 2035. The transition is real, uneven, and difficult — jobs lost, institutions strained — but navigable if tracked and governed. Not a clean destination: an extended adjustment.
Not a destination but a structure: an international treaty with safety standards, inspection regimes, and liability — modeled on nuclear regulation and the Montreal Protocol. Which future you reach depends on whether this gets built first.
The race doesn't resolve cleanly. Capability distributes — open-source, state actors, smaller labs. No single entity captures AI. The landscape stays pluralistic, competitive, and ungoverned. Messier than any of Mostaque's three.
The race to AGI is the wrong question. Small, task-specific models trained on local data would serve most of the world's actual needs — without concentrating power in the same way. This path requires rejecting the premise that general AI is the goal.
This dynamic is a competitive structure. Individual good intentions don't change the incentives. The people building AI include many who genuinely want good outcomes. They are still caught in a race no one chose and no one controls.
The race dynamic is probably the single most important fact for a citizen trying to understand why governance is failing. It explains why "can't we just slow down?" keeps running into the same wall. Slowing down requires every major actor to slow down simultaneously — which requires international coordination the world has not yet found the will to attempt.
It has been done before. The Montreal Protocol phased out CFCs. Chernobyl produced an international nuclear safety regime. These precedents exist. They also required shared recognition of the problem first, which is what this site is trying to support.
The question everyone is theorizing or resisting
The Horizon, or a Distraction?
Some call it P(doom). Others say that's the wrong question.
The same end-state sits at the center of the most serious technical arguments in the field, and of the most serious critiques of those arguments. Where thinkers land on the spectrum reveals as much about their frame as their math.
He takes the risk seriously enough to be on the left side of this spectrum — but unlike Yudkowsky and Yampolskiy, he believes the alignment problem is solvable. His argument: the danger is real, the window is closing, and the response is to build international governance structures now, before crossing the event horizon. The most institutionally credible voice in the risk camp, and the most constructive.
Where this leaves you
Your Footing
You now have more of a map than most people who are talking about AI in public. The frames on this site are the building blocks of the actual discourse: they show up in policy papers, op-eds, boardrooms, and campaign speeches. Knowing which frame is doing the work in an argument lets you ask better questions about it.
There is one gap in the discourse that Stuart Russell named and no one has yet closed: we have no good shared story about what a positive AI future actually looks like. Our dominant cultural images are either utopian in a thin way (AI solves everything) or dystopian (Terminator, The Matrix, WALL-E's atrophied humans). What's missing is a widely shared image of human beings flourishing alongside powerful AI.
Part of what makes the race dynamic so hard to interrupt is that no one is quite sure what we're racing toward, beyond "more capability."
You already have a role: as a voter, a worker, a researcher, a parent, a voice in your community. What you do with a clearer map is up to you.