Here is a tweet:
I think this is a great line of inquiry, and it lets me explain my stance on superintelligence quite concisely.
We do not have a good system of ethics that works in extreme scenarios. While it is possible to build one, doing so is a massive undertaking that will take decades at least. Our systems of ethics usually focus on only a couple of moral intuitions at a time, and try to help with our decision making in select practical circumstances. They are just barely adequate for the moral decisions we have to make in the many walks of daily life.
Superintelligent agents make for quite an extreme scenario. None of our moral and ethical systems knows how to deal with it. Speaking of “AI alignment” in this context is therefore quite shaky.
In practice, the best our ethics can offer in that case is “Maybe don’t do things that have too large an impact, that concentrate power too much, or from which we cannot reliably recover.” This principle goes by many names, from the practical Risk Reduction and the Precautionary Principle, to the more philosophical Chesterton’s Fence and Conservatism, to the political Separation of Powers and Competition Law.
Some powerful technologies and inventions have a large impact by their very existence, even without any agency. Nukes, for example. Superintelligent agents fall squarely in the realm of “things that have too large an impact, concentrate power too much, or from which we cannot reliably recover”. Thus, we should not build them.
This is a short post. I just liked the chance to share a more compact presentation of a natural intuition.
Cheers!
I like ur premises (we don't have ethics for extremes && super-intelligence is an extreme) but don't think they imply ur conclusion (don't enter extreme territory) any more than its opposite (do enter extreme territory).
I think there are other reasons not to enter extreme territory. eg if we want to preserve current ways of life.