The Spectre haunting the "AI Safety" Community
A consistent failure to inform lawmakers and laypeople about extinction risks.
I’m the originator of ControlAI’s Direct Institutional Plan (the DIP), built to address extinction risks from superintelligence.
My diagnosis is simple: most laypeople and policy makers have not heard of AGI, ASI, extinction risks, or what it takes to prevent the development of ASI.
Instead, most AI Policy Organisations and Think Tanks act as if “Persuasion” were the bottleneck. This is why they care so much about respectability, the Overton Window, and similar social considerations.
Before we started the DIP, many of these experts stated that our topics were too far out of the Overton Window. They warned that politicians could not hear about binding regulation, extinction risks, and superintelligence. Some mentioned “downside risks” and recommended that we focus instead on “current issues”.
They were wrong.
In the UK, in little more than a year, we have briefed over 150 lawmakers, and so far, 112 have supported our campaign on binding regulation, extinction risks, and superintelligence.
The Simple Pipeline
In my experience, the way things work is through a straightforward pipeline:
Attention. Getting people’s attention. At ControlAI, we do it through ads for laypeople, and through cold emails for politicians.
Information. Telling people about the situation. For laypeople, we have written a lot, including The Compendium (~a year before If Anyone Builds It, Everyone Dies). For politicians, we brief them in person.
Persuasion. Getting people to care about it.
Action. Getting people to act on it.
At ControlAI, most of our efforts have historically been on steps 1 and 2. We are now moving to step 4!
If it seems like we are skipping step 3, it’s because we are.
In my experience, Persuasion is literally the easiest step.
It is natural!
People and lawmakers obviously do care about risks of extinction! They may not see how to act on them, but they do care about everyone (including themselves) staying alive.
—
Attention, Information and Action are our major bottlenecks.
Most notably: when we talk to lawmakers, most have not heard about AGI, ASI, Recursive Self-Improvement, extinction risks, or what it takes to prevent them.
This requires briefing them on the topic and having convenient material at hand. The piece of evidence I share most often is the Center for AI Safety’s statement on extinction risks, signed by CEOs and top academics. But it is getting old (almost 3 years now), and the individuals involved have been less explicit since then.
There are longer-form arguments, like the book If Anyone Builds It, Everyone Dies. But getting lawmakers to read them requires holding their Attention for far longer than a briefing does.
Finally, once lawmakers are aware of the risks, it still takes a lot of work to identify concrete actions they can take. In a democracy, most representatives have very limited unilateral power, so we must come up with individualised Actions for each person to take.
—
I contend that AI Policy Orgs should focus on
1) Getting the Attention of lawmakers
2) Informing them about ASI, extinction risks, and the policy solutions.
Until this is done, I believe that AI Policy Orgs should not talk about the “Overton Window” or similar considerations. They do not have the standing to do so, and are self-defeatingly overthinking it.
I recommend that all these organisations take deliberate steps to ensure their members mention extinction risks when they talk to politicians.
This is the point behind ControlAI’s DIP.
Eventually, we may get to the point where we know that all politicians have been informed, for instance through their public support of a campaign.
Once we do, I think we may be warranted in thinking about politics, “practical compromises”, and the like.
The Spectre
When I explain the Simple Pipeline and the DIP to people in the “AI Safety” community, they usually nod along.
But then, they’ll tell me about their pet idea. Stereotypically, it will be one of:
Working on a technical “safety” problem like evals or interpretability: problems that are not the bottleneck in a world where AI companies are racing to ASI.
Doing awareness work, but without talking about extinction risks or their political solutions, because it’s easier not to talk about them.
Coincidentally, these ideas are about not doing the DIP, and not telling laypeople or lawmakers about extinction risks and their policy mitigations.
—
Let’s consider how many such coincidences there are:
If a capitalist cares about AI extinction risks, they have Anthropic to throw money at.
If a tech nerd cares about AI extinction risks, they can work at the “AI Safety” department of an AI corporation.
If a tech nerd cares about AI extinction risks, and they nominally care about Conflicts of Interest, they can throw themselves at an evals org, whether a public AISI or a private third-party evaluator.
If a policy nerd cares about AI extinction risks, they can throw themselves at one of the many think tanks that ~never straightforwardly mention extinction risks to policy makers.
If a philanthropist cares about AI extinction risks, they can fund any of the above.
This series of unfortunate coincidences is the result of what I call The Spectre.
The Spectre is not a single person or group. It’s a dynamic that has emerged out of many people’s fears and unease, the “AI Safety” community rewarding too-clever-by-half plans, the techno-optimist drive to build AGI, and the self-interest of too many people interwoven with AI Corporations.
The Spectre is an optimisation process that has run in the “AI Safety” community for a decade.
In effect, it consistently creates alternatives to honestly telling lay people and policy makers about extinction risks and the policies needed to address them.
—
We have engaged with The Spectre. We know what it looks like from the inside.
To get things going funding-wise, ControlAI started by working on short-lived campaigns. We talked about extinction risks, but also about many other things. We ran one campaign around the Bletchley AI Safety Summit, one on the EU AI Act, and one on deepfakes.
After that, we managed to raise money to focus on ASI and extinction risks through a sustained long-term campaign!
We started with the traditional methods. As expected, the results were unclear, and it was hard to know how instrumental we were to the various things happening around us.
It was clear that the traditional means were not efficient enough and would not scale to deal fully and durably with superintelligence. So we finally went for the DIP. This is when things started noticeably improving and compounding.
For instance, in January 2026 alone, the campaign has led to two debates in the UK House of Lords about extinction risk from AI, and a potential international moratorium on superintelligence.
This took a fair amount of effort, but we are now in a great state!
We have reliable pipelines that can scale with more money.
We have good processes and tracking mechanisms that give us a good understanding of our impact.
We clearly see what needs to be done to improve things.
It’s good to have broken out of the grasp of The Spectre.
—
The Spectre is actively harmful.
There is a large amount of funding, talent and attention in the community.
But the Spectre has consistently diverted resources away from DIP-like honest approaches that help everyone.
Instead, The Spectre has favoured approaches that avoid alienating friends in a community that is intertwined with AI companies, and that serve the status and influence of insiders as opposed to the common good.
When raising funds for ControlAI, I have run into The Spectre repeatedly. Many times, I have been asked, “But why not fund or do one of these less problematic projects?” The answer has always been “Because they don’t work!”
But reliably, The Spectre comes up with projects that are plausibly defensible, and that’s all it needs.
—
The Spectre is powerful because it doesn’t feel like avoidance. Instead…
It presents itself as Professionalism, or doing politics The Right Way.
It helps people perceive themselves as sophisticated thinkers.
It feels like a clever solution to the social conundrum of extinction risks seeming too extreme.
While each alternative The Spectre generates is intellectually defensible, together they form a pattern.
The pattern is being 10 years too late in informing the public and the elites about extinction risks. AI Corporations got their head start.
Now that the race to ASI is undeniable, elites and lay audiences alike are hearing about extinction risks for the first time, without any groundwork laid down.
Conclusion
There is a lot to say about The Spectre. Where it comes from, how it lasted so long, and so on. I will likely write about it later.
But let’s start by asking what it takes to defeat it, and I think the DIP is a good answer.
The DIP is neither clever nor sophisticated. By design, the DIP is Direct. That way, one cannot lose oneself in the many mazes of rationalisations produced by the AI boosters.
In the end, it works. 112 lawmakers supported our campaign in little more than a year. And it looks like things will only snowball from here.
Empirically, we were not bottlenecked by the Overton Window or any of the meek rationalisations people came up with when we told them about our strategy.
The Spectre is just that, a spectre, a ghost. It isn’t solid and we can just push through it.
—
If, reading this, your instinct is to retort “But that’s only valid in the UK” or “But signing a statement isn’t regulation”, pause a little.
You have strong direct evidence that the straightforward approach works. It is extremely rare to get evidence that clear-cut in policy work. Instead of engaging with it and working through its consequences, you are looking for reasons to discount it.
The questions are fair: I may write a longer follow-up piece about the DIP and how I think about it. But given that this piece is about The Spectre, consider why they are your first thoughts.
—
On this, cheers!