15 Comments
Nathan Metzger

Right on target. When ordinary people inform their lawmakers about the situation and pressure them to act, they tend to take action! My own state-level representative drafted an AI safety bill after I got one meeting with her, and I'm just some guy! These are scary times, and the vibes are now publicly shifting. It's time to demand a global moratorium on frontier AI development.

Andrea Miotti

That's awesome, great work contacting your lawmaker!

Could you share who the state rep is and the bill? In a DM if you prefer.

Nathan Metzger

I'll DM you, since the bill hasn't been published publicly yet.

Connor Flexman

Love the DIP, and love this. There are admittedly a large-ish number of people I might prefer didn't directly try to do this, but for likable and professional people I definitely think this is the move.

(Rafe just mentioned your name the other day, wonder if it was in relation to this)

Tristan Trim

I really like and deeply agree with this post and with DIP, but also, it really pisses me off.

I started reading and thinking about ASI Alignment around 2013, and back then there was no ChatGPT; telling laypeople you were concerned about AI leading to extinction often made them think you were actually insane, like, schizophrenic. The strength the spectre has in my heart comes from the lived experience of trying to convince people I'm not insane. Of wondering if I really am insane.

For all I hate large AI companies for creating a dangerous AI capabilities race, I actually feel deep gratitude to them for making AI something that everyone wants to talk about. I hate how everyone talks about it as if it's a new thing that was just invented, but at least I can talk to people about it now.

And I have been working towards being a technical AI Alignment researcher for many years now. I think activities like DIP are far more important right now than any technical work, so much so that I ran PauseAI protests and outreach while stressed out of my mind finishing my BSc.

I want you to know how ridiculous of a change of career path "computer scientist" to "activist & policy maker" is. I do not think that is normal and I do not think that is a normal thing to ask of people. I agree with you that you should be asking that of people, and maybe my experience is rare enough that it doesn't make sense to talk about. Maybe most people are not involved with AI at a technical level and those that are really do not care about AI Alignment and Ord's precipice and the future of humanity.

I really am grateful for everything you are doing and for writing this post to direct attention to this problem, but all the same, reading this was quite aggravating for me.

Gabe

I don't really have a response to your comment.

This is what you lived, how you felt and how you feel.

I just wanted to let you know that I acknowledge it, and to share my belief that in an ideal world, you would not have to go through this.

Max Winga

Fwiw, I relate to your experience going down a similar path, albeit with less time in my career. I originally studied physics in university and wanted to work on technical / scientific research, and worked at Conjecture doing AI safety research. Upon being confronted with the fact that technical solutions aren't the answer, I realized I needed to work on advocacy.

On the bright side, something I've enjoyed about advocacy work is learning how political action works and applying a technical lens to optimizing it. There's a ton of low-hanging fruit in this area!

Also, being someone who's clearly not your standard activist type, but who is so concerned by the state of things that you've pivoted to working on this, is quite a strong angle to lean on.

The nice thing about short timelines is that while we don't have long to win, it won't be long until we do, at which point we can go back to whatever else we'd like, with a fascinating chapter about how we helped save humanity added to our stories :)

Harry Turnbull

I need courage sometimes to break through the learned fear of social exclusion when talking about extinction risk, but I keep reminding myself that whenever I’ve done it, it’s never gone badly.

This article gives me more courage too.

Lucas Duarte

Thanks for this! It convinced me to take a serious look at what I can do from Brazil.

This is an election year here, and as such, it feels like exactly the kind of window where contacting elected representatives could actually make a difference.

We also have the PL 2338/2023 (our AI regulation bill) currently moving through Congress. It's a risk-based framework inspired by the EU AI Act, but it doesn't address extinction risks or superintelligence at all. That gap seems like precisely the kind of thing the DIP approach could help fill.

The landscape here is, of course, very different, especially given the lack of our own foundational AI companies. That said, I think raising awareness about these risks among lawmakers here, especially those who are already engaging with AI regulation, could be valuable for the broader global ecosystem.

I've been in the ControlAI Discord for some time and have been engaging with Microcommit (that's actually how I found this post), but I would also love to hear your take on the value of the effort here.

Cheers!

Aidan Doherty

Do you have a plan in mind for how to use a similar persuasion campaign for the US Congress? I do suspect it may be more of an uphill battle given the dominance of large AI companies’ money and influence here. But I think it could still have an impact.

Stewart

“If reading this, your instinct is to retort “But that’s only valid in the UK” or “But signing a statement isn’t regulation”, pause a little”

I just don’t understand why ControlAI is focusing so much on the UK. If it’s US companies that are building the superintelligence, and the US government that is opposing all AI regulation, why focus on the UK and not the US?

Will Duncan

I love the fact that you're raising awareness. I also want to inject a little pragmatism here by pointing at the elephant in the room: the incentive to build AGI is the strongest attractor in existence in the economic and geopolitical landscape right now. It's the capture of all of labor on a timescale of years, and the multiplication of gross product by orders of magnitude by way of faster labor and scientific advance. You cannot stop it (when was the last time humanity successfully coordinated on a problem of this complexity?); you can barely slow it; your ultimate goal should be to change the trajectory.

By all means, make it completely clear to as much of the population as possible what is happening and what the risks are. However, that is the instrumental goal. Redirecting capital and attention toward safety research and infrastructure, and away from capabilities, in the interest of diverting the trajectory, is the final goal.

Gabe

I do not think the goal is only to make it clear what is happening and what the risks are.

It is also to make it clear what it would take to eliminate the extinction risks from ASI. Things like "international agreements to immediately halt its development, stronger measures to postpone it in the near-term future, and then building stronger institutions and safety research with the gained time."

Whether these measures are too expensive is then a choice that we can collectively make. There just needs to be an understanding of what the different choices are first.

---

For the record, I don't think "When is the last time Humanity has done [X]" is a good framework for establishing whether Humanity can do [X].

Most notably, it fails to predict literally all the new things that have happened or will happen, including AGI ("When's the last time humanity built a new form of intelligence that could entirely replace it?").

Will Duncan

"International Agreements to immediately halt its development, stronger measures to postpone it in the near term future"

That simply isn't practical. Climate change, nuclear disarmament, loss of biodiversity, COVID. Considering precedent is useful because it gives you priors, and that's why the fact that humanity is consistently terrible at solving coordination problems is relevant here. The short-term costs of cooperating vs. defecting for individual actors are astronomically out of our favor.

"Most notably, it fails to predict literally all the new things that have happened or will happen, including AGI"

Stated otherwise, "we should start from agnostic priors". That's not reasonable when we have overwhelming evidence.

Please keep doing what you're doing, it's important work. But stay plugged into what the safety community is discovering, because building intelligent infrastructure that is antifragile with respect to the blow of ASI is where we're heading.

Chris L

I found the Simple Pipeline, and the claim that you can just skip step 3, quite thought-provoking, but honestly, the analysis related to 'The Spectre' felt quite sparse.