
The problem this site addresses

There is no neutral way to describe what AI is or does. Call it a tool and you imply it has no agency of its own. Call it an existential risk and you imply the threat comes from the technology, not from the people controlling it. Call it a colonial force and you locate the problem in power structures, not in code. Each description is a frame — a set of assumptions that makes some things visible and puts others in shadow.

Most public discourse about AI is not transparent about its frames. A politician who calls AI "just a tool" is making an argument, not a neutral observation. A researcher who calls it an existential risk is making a different argument. A journalist who calls it a colonial force is making a third. The choice of frame determines what evidence counts, what solutions look like, and who gets to participate in the conversation.

The core claim: Most people are already inside a discourse about AI without knowing it. Governance, policy, and cultural response are being shaped by a handful of stories — and ordinary people have almost no foothold to evaluate or push back on them. This site tries to give them one.

The sources

The content draws on two rounds of transcript research: extended podcast conversations with thirteen thinkers, selected for their range of technical backgrounds, institutional positions, and analytical frameworks. The conversations were analyzed for the frames each thinker deploys, where they agree, and where they sharply diverge.

The thirteen voices, their roles, and their source conversations:

Emad Mostaque
Stability AI founder · Know Thyself
Roman Yampolskiy
AI safety researcher, U of Louisville · Know Thyself
Raschka & Lambert
ML researchers and educators · Lex Fridman
Karen Hao
Journalist, author of Empire of AI · Diary of a CEO
Tristan Harris
Center for Humane Technology · Diary of a CEO
Stuart Russell
UC Berkeley, AI textbook author · Diary of a CEO
Timnit Gebru
DAIR Institute founder · The Maybe
Eliezer Yudkowsky
MIRI co-founder · Modern Wisdom
Geoffrey Hinton
Nobel Prize 2024, Google DeepMind · Diary of a CEO
Yoshua Bengio
Mila founder, Turing Award · Diary of a CEO
Dario Amodei
Anthropic CEO · Dwarkesh Patel
Jack Clark
Anthropic co-founder, Import AI · Ezra Klein Show
Demis Hassabis
Google DeepMind CEO · Unknown Podcast

What a frame is — and isn't

A frame is not a lie or a bias to be corrected. It's a lens: a coherent set of assumptions that makes certain questions askable and others invisible. The researcher who treats AI as an alignment problem is not wrong — but the frame makes power dynamics harder to see. The journalist who treats it as a colonial force is not wrong either — but that frame makes the technical alignment problem harder to address.

All nine frames on this site are in active use by serious people. Each illuminates something true about the situation. Understanding them as frames — rather than as competing claims about objective reality — is the analytical move this site is trying to support.

Editorial stance

This project holds a light editorial position: the discourse is not neutral, and we are not going to pretend otherwise. But we are not recruiting. The goal is to surface the frames in play — including the assumptions baked into each one — and give visitors enough ground to form their own views.

We do not tell people what to think. We show them that they are already being told.

The thinkers on this site were not selected to represent a balanced scorecard. They were selected because they are articulate representatives of distinct positions and because their conversations revealed the stakes clearly. Some hold views that conflict sharply with others. That tension is preserved rather than smoothed over.

Who this is for

The intended audience is the general public: people who encounter AI in news coverage, in tools they use, in policy debates — and who don't yet have a framework for evaluating the claims made about it. Not researchers, not engineers. Voters, advocates, workers, parents — anyone who wants to think more clearly about what is actually being decided in their name.

A Living Fiction project

The AI Frame Problem is part of Living Fiction, a broader project mapping how speculative fiction, theory, and media intersect with the AI era. The sister project, Story Lineage, traces how fiction anticipated the present moment. This site traces who is narrating what comes next — and what assumptions their narration carries.