Discussion about this post

Nathan Metzger

It's another point in the column of "Holly Elmore keeps being right about things."

Stewart

I saw a recent interview with Bernie Sanders, Eliezer Yudkowsky, Nate Soares, and some other AI safety researchers. For most of the interview they discussed one of the recent evals in which AI models downplayed their capabilities during testing. It seems like having examples from evals like this can be helpful, and I'm not sure what they would have discussed instead.

But I also see what you're saying: if evals really were a path to regulation, why would the AI companies support them? Maybe they actually believe what they're saying about racing with China and AI being an existential risk. I don't think they're lying when they say their p(doom) is 25%, so maybe evals are how they reassure themselves.

My biggest disagreement, though, is that I can't think of a single technology that was regulated before there was at least some evidence of it being harmful. Human cloning, maybe, but that was regulated only after animal cloning. It could be selection bias, in that I can't think of examples precisely because preemptively regulated technologies never caused any harm, but I'm not sure.
