Lying is Cowardice, not Strategy
The more important the topic, the worse it is. And humanity's extinction is pretty important.
(Co-written by Connor Leahy and Gabe)
We have talked to a whole bunch of people about pauses and moratoriums. Members of the AI safety community, investors, business peers, politicians, and more.
Too many claimed to pursue the following approach:
It would be great if AGI progress stopped, but that is infeasible.
Therefore, I will advocate for what I think is feasible, even if it is not ideal.
The Overton window being what it is, if I claim a belief that is too extreme, or endorse an infeasible policy proposal, people will take me less seriously on the feasible stuff.
Given this, I will be tactical in what I say, though I will avoid stating outright lies.
Consider if this applies to you, or people close to you.
If it does, let us be clear: hiding your beliefs, in ways that predictably lead people to believe false things, is lying. This is the case regardless of your intentions, and regardless of how it feels.
Not only is it morally wrong, it makes for a terrible strategy. As it stands, the AI Safety Community itself cannot coordinate to state that we should stop AGI progress right now!
Not only can it not coordinate, the AI Safety Community is defecting, by making it more costly for the people who do say it.
We all feel like we are working on the most important things, and that we are being pragmatic realists.
But remember: If you feel stuck in the Overton window, it is because YOU ARE the Overton window.
—
1. The AI Safety Community is making our job harder
In a saner world, all AGI progress would have already stopped. If we don't stop, there's more than a 10% chance we all die.
Many people in the AI safety community believe this, but they have not stated it publicly. Worse, they have stated different beliefs more saliently, misdirecting everyone else about what should be done and about what the AI safety community believes.
To date, in our efforts to inform, motivate, and coordinate with people, public lying by people in the AI Safety Community has been one of the biggest direct obstacles we have encountered.
The newest example of this is "Responsible Scaling Policies", with many AI Safety people being much more vocal about their endorsement of RSPs than about their private belief that, in a saner world, all AGI progress should stop right now.
Because of them, we have been told many times that we are a minority voice, and that most people in the AI Safety community (read: Open Philanthropy adjacent) disagree that we should stop all AGI progress right now.
That actually, there is an acceptable way to continue scaling! And since that makes things easier, if there is indeed an acceptable way to continue scaling, then that is what we should do, rather than stop all AGI progress right now!
Recently, Dario Amodei (Anthropic CEO) has used RSPs to frame the moratorium position as the most extreme version of an extreme position, and this is the framing we have seen used over and over again. ARC mirrors this in their version of the RSP proposal, which describes itself as a “pragmatic middle ground” between a moratorium and doing nothing.
Obviously, all AGI Racers use this against us when we talk to people.
There are very few people that we have consistently seen publicly call for a stop to AGI progress. The clearest ones are Eliezer’s “Shut it All Down” and Nate’s “Fucking stop”.
The loudest silence is from Paul Christiano, whose RSPs are being used to safety-wash scaling.
Proving me wrong is very easy. If you do believe that, in a saner world, we would stop all AGI progress right now, you can just write this publicly.
When called out on this, most people we talk to just fumble.
2. Lying for Personal Gain
We talk to many people who publicly lie about their beliefs.
The justifications are always the same: “it doesn’t feel like lying”, “we don’t state things we do not believe”, “we are playing an inside game, so we must be tactical in what we say to gain influence and power”.
Let us call this what it is: lying for personal gain. If you state things whose main purpose is to get people to think you believe something else, and you do so to gain more influence and power, you are lying for personal gain.
The results of this “influence and power-grabbing” have materialised many times over in the safety-washing of the AGI race. What a coincidence that DeepMind, OpenAI and Anthropic are all related to the AI Safety community.
The only benefit we see from this politicking is that the people lying gain more influence, while the time we have left until AGI keeps getting shorter.
Consider what happens when a community rewards the people who gain more influence by lying!
—
So many people lie, and they screw not only humanity, but one another.
Many AGI corp leaders will privately state that in a saner world, AGI progress should stop, but they will not state it because it would hurt their ability to race against each other!
Safety people will lie so that they can keep ties with labs in order to “pressure them” and seem reasonable to politicians.
Whatever: they just lie to gain more power.
“DO NOT LIE PUBLICLY ABOUT GRAVE MATTERS” is a very strong baseline. If you want to defect, you need a much stronger reason than “it will benefit my personal influence, and I promise I’ll do good things with it”.
And you need to accept the blame when you’re called out. You should not muddy the waters by justifying your lies, covering them up, telling people they misunderstood, or trying to maintain more influence within the community.
We have seen so many people taken in by this web of lies: from politicians and journalists, to engineers and intellectuals, all the way to the concerned EA or regular citizen who wants to help but is confused by our message when it looks like the AI safety community is ok with scaling.
Your lies compound and make the world a worse place.
There is an easy way to fix this situation: we can adopt the norm of publicly stating our true beliefs about grave matters.
If you know someone who claims to believe that in a saner world we should stop all AGI progress, tell them to publicly state their beliefs, unequivocally. Very often, you’ll see them fumbling, caught in politicking. And not that rarely, you’ll see that they actually want to keep racing. In these situations, you might want to stop finding excuses for them.
3. The Spirit of Coordination
A very sad thing that we have personally felt is that many people seem so tangled in these politics that they no longer understand what the point of honesty even is.
Indeed, from the inside, it is not obvious that honesty is a good choice. If you are honest, publicly honest, or even adversarially honest, you just make more opponents, you have less influence, and you can help less.
This is typical deontology vs consequentialism. Should you be honest, if from your point of view, it increases the chances of doom?
The answer is YES.
a) Politicking has many more unintended consequences than expected.
Whenever you lie, you shoot potential allies at random in the back.
Whenever you lie, you make it more acceptable for people around you to lie.
b) Your behavior, especially if you are a leader, a funder or a major employee (first 10 employees, or responsible for >10% of the headcount of the org), ripples down to everyone around you.
People lower in the respectability/authority/status ranks do defer to your behavior.
People outside of these ranks look to you.
Our work toward stopping AGI progress becomes easier whenever a leader/investor/major employee at OpenAI, DeepMind, Anthropic, ARC, Open Philanthropy, etc. states their beliefs about AGI progress more clearly.
c) Honesty is Great.
Existential Risks from AI are now going mainstream. Academics talk about it. Tech CEOs talk about it. You can now talk about it, not be a weirdo, and gain more allies. Polls show that even non-expert citizens express diverse opinions about superintelligence.
Consider the following timeline:
ARC & Open Philanthropy state in a press release “In a sane world, all AGI progress should stop. If we don’t, there’s more than a 10% chance we will all die.”
People at AGI labs working in the safety teams echo this message publicly.
AGI labs leaders who think this state it publicly.
We start coordinating explicitly against orgs (and groups within orgs) that race.
We coordinate on a plan whose final, publicly stated goal is to get to a world state that, most of us agree, is not one where humanity’s entire existence is at risk.
We publicly, relentlessly optimise for this plan, without compromising on our beliefs.
Whenever you lie for personal gain, you fuck up this timeline.
When you start being publicly honest, you will suffer a personal hit in the short term. But we truly believe that, coordinated and honest, we will have timelines much longer than any Scaling Policy will ever get us.