This is awesome. But I have a more personal question: I assume, given your background at Conjecture and in AI safety, that you are pretty AGI-pilled and have reasonably short AGI timelines. Why does confronting this general 21st-century Eldritch matter? Why not just focus entirely (to the detriment of other things) on getting AGI right? This post seems to me like another viewing angle on Scott Alexander's Meditations on Moloch, and his conclusion is basically that we need a central figure to break Moloch, and that central figure is an aligned AGI.
The reason we are failing to deal with the Eldritch is also upstream of why we are failing to build alignment schemes that scale to AGI.
Why?
Currently writing about it.
ㄒ卄乇 乇匚ㄖ几ㄖ爪ㄚ
lolol
I think you are doing a great job, and I’m inspired to help as well!
Also, the part where you used the different fonts was really funny.
👍
Incredible writing, bringing so many concepts together. This certainly helps me frame many societal problems in my own head. I'm inspired to think more deeply and discuss with others.
I feel motivated :)