Rethinking Superintelligence
Reframing superintelligence: beyond individual geniuses to collective intelligence.
Low Expectations
I often discuss extinction risks from superintelligence.
For the most part, people who deny the risks simply disagree about how powerful superintelligence can realistically be.
This makes sense!
Wikipedia defines superintelligence as "a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds."
"Far surpassing that of the brightest and most gifted human minds" is not that high of a threshold. A single super-Einstein could be called superintelligence in that frame.
And if this is the core example of superintelligence, then… yeah. Super-Einstein sure can seem dangerous, but not that dangerous. Super-Einstein doesn't defeat the rest of humanity combined and surely is not powerful enough to kill everyone as a side-effect.
Naive Understanding of Intelligence
A single person will not outsmart the rest of humanity, much less outdo it.
This intuition seems to come from seeing intelligence as bounded: no matter how smart a single human is, there is no way they can have enough intelligence to surpass the rest of humanity.
Sure, one could conceive of some type of intelligence that transcends that of humans. But this feels like sci-fi, not a given. How can I, Gabe, be so confident that such intelligence is possible?
—
This confusion comes from a naive conception of intelligence.
There is a core intuition hidden in the "Low Expectations" section of this post. The intuition is that humanity is much more intelligent as a collective than any single person.
People fundamentally think of intelligence in terms of human abilities: language, logic, tool use, complex thinking, planning, and problem-solving.
But these characterisations are downstream of intelligence. They come from a drunkard-under-the-streetlight understanding of intelligence: looking at what humans can do that animals and machines cannot. Those definitions are myopic because we do not know better; it seems we have no better example of intelligence than that.
This is wrong. We already have many examples of intelligence on Earth that are far smarter than any human.
Collective Intelligence
No single human knows the entire blueprint and supply chain of the latest Samsung phone. This knowledge does not reside in the brain of any single human.
As individuals, we are too dumb and limited to build such things by ourselves.
Yet, an intelligent entity put its intelligence into designing, producing and marketing that phone: Samsung.
—
A serious conception of intelligence must be able to describe collective intelligence. When thinking of the smartest existing entities, it must look more to the biggest groups that manage to coordinate to build the greatest things than to the smartest individuals.
A conception of intelligence that misses this is not suffering from a minor blindspot; it suffers from a gaping hole. It cannot even analyse how research, markets, organisations or public debates work, let alone superintelligence.
—
Public debates are important.
They are not important because they make the participants smarter or let them converge to some truth.
They are important because they improve the overall quality of discussions.
Everyone gets access to the best arguments and can start building better arguments on top of them. People can start thinking independently or having debates and conversations of their own. Different people will have different ideas; some of them will be good, and those should bubble up as people have more and more public debates.
No individual needs to know all the right arguments. What is needed is for the right arguments to percolate everywhere.
In other words, public debates are about making the overall group smarter. This is the vision behind The Marketplace of Ideas.
—
Another example of collective intelligence is research.
Many ideas are "discovered" independently at the same time. The reason is that ideas are not single atoms but complex graphs spread across many individuals. The worker who lays the last brick of a building did not build it. Ideas are not so much discovered by individuals as they are completed. And even then, they are rarely truly complete: they will be clarified, refined, built upon, and spread over time.
While the idea of maths is old, our modern understanding is much more refined than anything any one of us could have come up with alone. Our modern understanding is the result of humanity thinking about it.
We do not stand on the shoulders of giants; we are (part of) the giants. As thinkers, we are but links in a chain that comes from the past and extends into the future.
PhDs, lab technicians, teachers, book editors, Terence Tao, reviewer #4. For better or worse, they are all part of our giant machine that coalesces, crystallises and propagates knowledge everywhere.
Back to Superintelligence
When thinking about the heights of intelligence, I recommend thinking about collective intelligence rather than mere individuals. We should consider how academia works rather than any individual genius awarded a Nobel Prize or a Turing Award.
I suggest thinking of dangerous superintelligence not as "intelligence far beyond the brightest humans" but as "intelligence far beyond that of humanity". In other words, dangerous superintelligent entities are not mere super-Einsteins; they are entities that can outsmart humanity.
I believe that this is a major source of disagreement. Accelerationists simply do not believe that artificial superintelligence at this level can be built anytime soon.
Their beliefs are that "current AI research will plateau long before it reaches even existing levels of intelligence" and that, thus, "AI progress will be slow and controlled enough that we always keep AIs aligned and integrated, such that our collective intelligence improves alongside that of AIs".
I happily grant that extinction risks would be much lower if it were neither likely nor possible to build AI systems smarter than humanity.
(For the record, I am overreaching when I attribute the above beliefs to accelerationists. I have never seen such beliefs explicitly stated without me asking clarifying questions. This is more a formalisation of intuitions I have extracted from many conversations than a report of clearly stated core beliefs.)
—
Unfortunately, many people worried about AI Extinction Risks have been captured by accelerationist companies.
Their belief is: "As long as it is my group who builds the superintelligent system that can outsmart the rest of humanity, things will be ok."
And to make sure their group is the one that gets power, they lie. A common lie is to downplay what they are building.
In private, AGI is a godlike entity that can build Dyson spheres and make governments obsolete. In public, AGI is just chatbots or robots that are as dextrous as humans.
Words are contextual. It makes sense that different words are used in different contexts. Unfortunately, this has the side-effect of confusing everyone else. Big Tech can publicly say they want to build AGI, knowing that different people will hear "Chatbots" or "Eldritch Entities" depending on their culture. In private, they can then pick the interpretations that help them best.
To avoid seeming stupid, too nerdy, or dangerous, they will avoid at all costs clearly stating: "Oh yeah, by the way! When we say we are building AGI, we mean that we will build entities smarter than humanity, entities that can topple all governments."
—
A way to defend against that is to craft a new word specifically describing intelligence higher than that of all of humanity.
But 1: I am not very good at naming things; and 2: before starting to reify concepts with names, I want to make them clearer and more salient.
That way, we can have more interesting debates where we discuss our core disagreements. For instance, do we believe that building such intelligence is realistic?
This would be much better than talking past each other and constantly being pwned by Big Tech PR and lobbying.
Conclusion
I will write more about the heights of superintelligence.
I specifically want to write about the different dimensions along which human intelligence can be scaled to superintelligence. I expect this will be of interest to many people.
Cheers, and have a nice day!
""Oh yeah, by the way! By AGI, we mean that we plan to believe entities smarter than humanity, the kind that kind topple all governments"." - Something's wrong with this sentence.
Thanks this was an interesting read.
Especially the parts about public debates improving the overall landscape of ideas, and seeing superintelligence as analogous to the entirety of humanity.