12 Comments
Brenda kniceley

Was this written by a human? Or the real question I ask: "Was this written by AI?"

Gabe

I don't know if it's a nice joke, a compliment, or something else :D

agentofuser

I like the delegating vs. deferring angle, and how most human agency is given up voluntarily in exchange for convenience, rather than hacked, persuaded, or coerced out of people.

Gabe
Oct 9 (edited)

Yup, I think most people focusing on hacking / persuasion / coercion are a bit ungrounded.

People have already given up a lot of their agency.

Alvin Ånestrand

Nice story! Thanks for including explanations for the narrative decisions, that was very helpful.

I think your "optimistic" assumptions might increase the likelihood of things going terribly wrong, though. We might have a better chance of surviving if dangerously sophisticated AIs arrive while humans and institutions still have enough agency to coordinate and intervene.

The assumptions also seem to make it less likely that we get clear warning shots, i.e. large-scale incidents that are clearly AI-attributable, which could happen if capabilities and misalignment increase faster than safety methods improve.

I'm definitely not certain though.

Gabe
Oct 8 (edited)

Thanks for your comment :)

-

> Thanks for including explanations for the narrative decisions, that was very helpful.

Glad to hear it! As a reader, I always wanted this when reading someone else's story haha

-

> We might have a better chance of surviving if dangerously sophisticated AIs arrive when humans and institutions still have enough agency to coordinate and intervene.

The optimistic part of the assumptions is about progress to ASI taking more time; I'm not sure how that relates to humans and institutions.

-

> The assumptions also appear to make it less likely that we get clear warning shots, like large-scale incidents that are clearly AI-attributable, which could happen if capabilities and misalignment increase faster than safety methods improve.

I don't think this is realistic, because companies optimise quite hard against warning shots and clearly attributable large-scale incidents.

We have already faced large-scale AI incidents because of social media, but Big Tech has worked hard to ensure they are not attributed to it.

Alvin Ånestrand

Good points. I still think this scenario underestimates the damage from incidents, though.

Regulation of open-source AI is very limited. Misuse such as hacking and help with bioweapons seems likely to occur before such regulation improves.

Hackers can attack those who are slow to adopt AI to improve their security.

Not all AI developers are equally careful (consider "MechaHitler"). Some may fail to control systems much weaker than GPT-Ω.

AI companies in general do not appear very competent at safety to me, though they may be good at deflecting blame and lobbying.

At least some misuse incidents and accidents would be traced to AIs, though it might still not be enough.

Gabe

> AI companies in general do not appear very competent at safety to me, though they may be good at deflecting blame and lobbying.

Yup, this is what I meant here and in the story.

I mean it at a much grander scale, though: they are controlling the shape of the debate, the frame of the conversation, the words being used, etc.

Alvin Ånestrand

To me it seems like major incidents will occur before AI companies have a chance to control that much of the conversation.

Maybe I'm underestimating how soon AI could affect online discourse and trust in news, and maybe government support could help before AI companies gain such influence themselves 🤔

Gabe

They have already done a lot.

Maybe I should write a stand-alone article about this topic specifically?

Like, "The state of AI lobbying in 2025" or something like this?

Alvin Ånestrand

Sounds like a good idea to me; I haven't yet encountered a good overview of lobbying initiatives and how they differ between companies.

LightGraffiti

Choosing to be vegan helps with understanding all this: AI and humanity as the final act.