9 Comments

All points excellent: indeed, unimpeachable. Techno-optimism as a cope for disliking coordination is especially brilliant. I have updated my model of this phenomenon accordingly.

I often find that I have a separate kind of implied disagreement -- implied because I agree with everything Yudkowsky (and anyone Yudkowsky-adjacent) says on the matter. Despite this, I am more wearied than worried. Why?

First, an aside. Seems to me that intelligence is a smaller portion of the human success story than anyone in the AI debate scene supposes. There's an old saying that bits are easy, atoms are hard. Human tech is a pyramid, the base of which is shrouded by the mists of prehistory. No single human who advanced agriculture or bow-hunting or early metalworking or charioteering or any other early tech even understood, let alone could have replicated by force of intelligence, the cultural-tech inheritance upon which he was building. Most of what we do to keep moving forward is done in ignorance of the complex systems that undergird whatever local portion of the edifice we somewhat understand. I don't just mean that a surgeon looking at imaging before beginning his work can't build an MRI (though of course this is true), but more that such a surgeon couldn't accurately guess, within a couple of orders of magnitude, the complexity of such a tool. We seem to do so much with intelligence because of uncountable ages of coordination. Is it possible that enough machine intelligence could leapfrog past all this and create diamondoid nanomachines? Yes, of course. I'm granting all the Yudkowskian arguments about superintelligence, and am not trying to handwave away its capabilities. Two questions:

How much smarter is that? How easy is it to get there from here? A lot, and not very.

Looks to me as though most thinkers are somehow a full order of magnitude off on both: the former is 10x higher than we'd hope, and the latter is 1/10th what we'd want to believe. Yes, Claude is already superhuman by many metrics. But how superhuman would a machine have to be? Probably vastly more than we poor apes can build out of sand -- especially if we're right now at the zenith of our power. The limits-to-growth alarmists are right that we passed peak cheaply-extractable resources decades ago, and the fertility doomers are right in calling this the peak of human capital. We came, not close, but as close as we could with our old railways and cobalt mines and quartz crucibles and hydrofracking rigs. A century from now will look more like the past than like the future, and nobody will remember our hopes and fears.

Sounds like you aren't actually buying the Yudkowskian foom argument. Humans only need to make one AI that is as good at building better AI as Ilya Sutskever. Given computer speed and software cloning, rapid recursive self-improvement then kicks in. We're not far off.
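
As a toy illustration of why "kicks in" is the operative phrase, here is a sketch in Python. Every number in it is invented purely to show the shape of the feedback loop, not a forecast of any real system:

    # Toy model of recursive self-improvement. All numbers are invented
    # for illustration; this is not a claim about any real system.
    capability = 1.0          # 1.0 = "as good at AI research as a top human"
    improvement_rate = 0.05   # assumed: progress per cycle scales with capability

    for cycle in range(1, 101):
        # The crux of the foom argument: the output of each research cycle
        # (a better AI) becomes the researcher for the next cycle.
        capability *= 1 + improvement_rate * capability
        if cycle % 5 == 0 or capability > 1000:
            print(f"cycle {cycle:3d}: capability {capability:8.1f}")
        if capability > 1000:
            break

Growth crawls for the first dozen cycles and then runs away, because the growth rate itself grows. The whole dispute in miniature is whether the first factor ever reaches the regime where the loop closes.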

With all possible respect, sounds like you're not hearing my actual point.

Au contraire: you aren't hearing me. You say "Probably vastly more than we poor apes can build out of sand" - I'm saying that is missing the point. For foom to happen, we only need to build an AI that is as good at making better AI as Ilya. We don't need to build superintelligence ourselves; Ilya-level AGI will build ASI. Do you think Ilya-level AGI is impossible? Why?

Now I'm definitely hearing you and you are definitely missing the entire point. The word 'only' is doing all of the heavy lifting there for you. Yeah, obviously the only thing that would need to happen in order to decisively prove me wrong is the thing that I'm saying isn't going to happen, correct.

"Mental healing at scale. The world is a very traumatising place. Our default coping mechanisms make us very un-ambitious. I believe there is an arbitrage in massively scalable therapy, that makes people more happy, more productive and more prone to coordination."

I would like to hear more about this. Why do you believe this? What would this therapy look like? How could it scale when many skilled therapists struggle to transfer their tacit knowledge to more than a couple of people at a time? What's the cheapest/fastest possible test of your idea?

This is good, and it will get better as the "Common Counter Arguments" section is expanded. Here are two other common counter-arguments:

* "AI will not be selfish and power seeking. You expect AI to have those traits only because you are fooled by the fact that all of your prior experience with intelligence has been with naturally evolved biological intelligences. Those intelligences are selfish and power seeking because natural selection favors these traits. But AI, in contrast, is created by a process that favors its helping its creators."

* "The more intelligent an AI is, the more certain we can be that it will discover that we have moral value. Even now, if an entire *species* of ant were going to be destroyed by the building of a tower, that would probably be enough to halt construction on the tower. Intelligent people know that it's important to avoid harming other animals. A superintelligent AI would be even farther along on this process of moral discovery, so we can be confident that it would work to preserve us and our freedom."

1. Evolution will also apply to AIs (see: https://time.com/6283958/darwinian-argument-for-worrying-about-ai/). See also: convergent instrumental goals.

2. Some humans might think that way about ants, but the vast majority of building projects aren't stopped to protect ant nests. Also, if we only protect ants at the species level, why wouldn't an AI following that logic just protect us at the species level? A few breeding pairs in zoos and the rest in digital storage. Sounds great, doesn't it?
