AI Timelines and Points of No Return
The final point of no return may happen before superintelligence.
Oftentimes, I get asked what my “AI Timelines” are. My response is quite short.
On the current trajectory, I believe it is likely that we will hit a point of no return (PNR) in the next 2-5 years.
While short timelines make for an interesting topic of discussion, I have found that quite a few people are more interested in what I mean by a point of no return.
So I’ll explain the concept here, along with the two PNRs that I usually consider.
The Hard PNR
The first PNR is where a lot of the attention has been drawn in AI circles. I call it the Hard PNR.
(The Hard here is the same “Hard” as in “Hardware” or “Hard Science”.)
The Hard PNR is when AI systems are so powerful that we (humanity) cannot turn them off.
This is how I operationalise ASI and superintelligence nowadays: systems that can outsmart us as a collective, and that, as a result, we physically can’t turn off.
It is obvious why this is a PNR. Once such AI systems are built, the future is out of our hands: we won’t be able to prevent whatever they do.
—
The Hard PNR is sometimes called “The First Critical Try”.
The rationale behind that name is the idea that when AI corporations build ever more powerful AI systems, they “try” to make them good (“aligned”).
In this frame, almost all of these “tries” are not critical. If they fail, although people may die or be harmed, humanity as a whole can still recover.
However, the first try past the point where AI systems are so powerful that we can’t stop them is different.
If that try fails, there is no recovery: we now have a powerful, misaligned AI system out there that we can’t stop, and that will only grow more powerful by the day.
The Soft PNR
There has been a lot of discussion about the Hard PNR.
But people underestimate the Soft PNR.
(The Soft here is the same “Soft” as in “Software” or “Soft Science”.)
The Soft PNR is when AI systems are so powerful that, although they “can” theoretically be turned off, there is not enough geopolitical will left to do so.
On the current trajectory, there are already many factors contributing to reaching such a point…
AI permeates the economy so much that AI systems are considered too big to fail.
AI corporations lobby governments so heavily that governments become captured.
AI systems become intertwined with critical decision-making, like making warfighting decisions faster than humans can.
People with control over AI systems fall into AI psychosis.
People at large become addicted to AI systems. This creates risks similar to how TikTok leveraged its users to fight off US regulation in 2024.
Geopolitical tensions make international coordination ever harder.
—
Once the Soft PNR is reached, it will be too late: all the critical coalitions needed to stop AI systems will have been completely neutered.
Past the Soft PNR, it doesn’t really matter whether we can stop AI systems.
We won’t.
More PNRs???
There are in fact many PNRs, many more than two.
I think about them less on a regular basis, but they are still conceptually important.
We are past quite a few of them…
For instance, in the past, it would have been conceivable for a single G20 country to unilaterally make banning the development of ASI and its precursors a national priority.
Likewise, it would have been conceivable for any country in the West to decide to fight off Big Tech and lead the collective fight.
—
Conversely, we live in a world that has so far avoided many PNRs…
For instance, countries can still talk to each other. This is true both at the elite level, and at the people level.
It is possible for almost any two willing people on Earth to instantly send each other a message. We could have ended up in a world where it was practically impossible for people to communicate with each other, but we did not.
In more than half of the world, elites and powerful people still listen to regular people through democratic processes and public media.
The world has not devolved into lawlessness. Governments exist almost everywhere. They manage to pass and enforce laws. They sometimes manage to act together.
Conclusion
The Soft PNR has not happened yet, but we are doing poorly.
—
AI is permeating more and more of the economy and critical infrastructure.
AI corporations are spreading their influence over governments.
After getting addicted to AI-selected feeds, people are becoming addicted to AI-generated content and turning to AI romantic companions.
People are falling prey to AI psychosis.
—
For comparison, ChatGPT is not even three years old.
The dynamics outlined above can unfold very quickly, much faster than our ability to deal with them.
Whilst the Hard PNR is an important topic to discuss, I believe its discussion currently obscures that of the Soft PNR.
And the Soft PNR is quite likely to come first.
—
However, the Soft PNR has not happened yet. We can still do things!
To avert it, at a personal level, I’d recommend:
Taking quick one-off actions from ControlAI’s Campaign
Subscribing to Microcommit to help with 5-10 mins a week
Joining Torchbearer Community to help with 3+ hours per week
Reaching out on Twitter to help in a way that doesn’t fit what’s described here
On this, cheers!