9 Comments

"Mental healing at scale. The world is a very traumatising place. Our default coping mechanisms make us very un-ambitious. I believe there is an arbitrage in massively scalable therapy, that makes people more happy, more productive and more prone to coordination."

I would like to hear more about this. Why do you believe this? What would this therapy look like? How could it scale when a lot of skilled therapists struggle to transfer their tacit knowledge to more than a couple of people at a time? What's the cheapest/fastest possible test of your idea?


All points excellent: indeed, unimpeachable. Techno-optimism as a cope for disliking coordination is especially brilliant. I have updated my model of this phenomenon accordingly.


This is good, and it will get better as the "Common Counter Arguments" section is expanded. Here are two other common counter arguments:

* "AI will not be selfish and power seeking. You expect AI to have those traits only because you are fooled by the fact that all of your prior experience with intelligence has been with naturally evolved biological intelligences. Those intelligences are selfish and power seeking because natural selection favors these traits. But AI, in contrast, is created by a process that favors its helping its creators."

* "The more intelligent an AI is, the more certain we can be that it will discover that we have moral value. Even now, if an entire *species* of ant were going to be destroyed by the building of a tower, that would probably be enough to halt construction on the tower. Intelligent people know that it's important to avoid harming other animals. A superintelligent AI would be even farther along on this process of moral discovery, so we can be confident that it would work to preserve us and our freedom."
