"If Anyone Builds It, Everyone Dies"

by nsoonhui on 6/2/2025, 11:54 AM with 20 comments

by southernplaces7 on 6/2/2025, 1:00 PM

The main problem here is more that Eliezer Yudkowsky is a tiresome, self-absorbed, self-promoting windbag with a penchant for saying absurdly over-the-top things, coated in just enough technobabble to seem sort of plausible if you squint, all to get some attention and make some bucks.

That's fine, but he's not in any way worth taking seriously or giving more eyeballs.

by arcanus on 6/2/2025, 1:10 PM

> And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.

This is the crux of the issue. There are simply no clearly articulated doom scenarios that don't involve massive leaps in capabilities, explained away by the 'singularity' being essentially magic. The entire approach is a doomed version of deus ex machina.

It also seems quite telling that the traditional approach focuses on exotic technologies, such as nanotech, rather than ICBMs. That, too, is magical thinking.

by pfdietz on 6/2/2025, 11:59 AM

So, under that assumption, no AI can ever be built by anyone, ever, or else humanity ends.

That seems like such a dire conclusion that the optimistic take would be to just assume it's wrong and proceed, since the chance of avoiding that eventual outcome seems remote.

by darepublic on 6/2/2025, 1:04 PM

If it were that important and plausible, he would naturally release the book for free.

by delichon on 6/2/2025, 12:44 PM

> And yet, even if you agree with only a quarter of what Eliezer and Nate write, you’re likely to close this book fully convinced—as I am—that governments need to shift to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created.

For "a more cautious approach" to be effective at stopping AI progress would require an authoritarian level of computer surveillance that isn't close to politically acceptable in this country. It can only become acceptable after lots of people die. And then to be practical it probably requires ... AI to enforce. So like nuclear weapons it doesn't get banned, it gets monopolized by states. But states aren't notably more restrained at seeking power than non-states, so it still gets developed and if everyone is gonna die, we die.

I respect Scott and Eliezer, but even if I agree with them on the urgency of the threat, I don't see a plausible way to stop it. A bit more caution would be as effective as an umbrella in an ICBM storm.

by api on 6/2/2025, 12:05 PM

I encourage people to listen to the Behind the Bastards podcast episodes on the Zizians. They provide an approachable and entertaining picture of what you get when someone takes the core philosophical ideas of the Rationalists deeply seriously. A reductio ad absurdum can be a good place to start.

I want to write a takedown of this nonsense, but there are about a hundred things I want to do more. I suspect that is true of most people, including people much better qualified to write a takedown of this than me.

I am not just referring to extreme AI doomerism but to the entire philosophical edifice of Rationalism. The interesting parts are not original and the original parts are not interesting. We would hear nothing about it were it not subsidized by tech bucks. It’s kind of like how nobody would have heard of Scientology if it hadn’t gotten its hooks into Hollywood. Rationalism seems to be Silicon Valley's Scientology.

Maybe the superhuman AI will decide to apply to each human being a standard based on their own chosen philosophical outlook. Since the Rationalists tend toward eugenics and scientific racism, it would conclude that they should be exterminated according to the logic they advance: each Rationalist would be given an IQ test, compared to the AI, and euthanized if they score lower.

I do wonder if there might be a bit of projection here. A group of people who believe that raw measured intelligence determines the value of a living being would naturally be nervous about the prospect of a machine exceeding them on that very metric. What if the AI isn't "woke"?

It's such an onion of bullshit; you can keep peeling and peeling for a long time. If I sound snarky and a little rough here, it's because I hate these people. They're at least partly responsible for sucking the brains out of a generation. But who knows, maybe I'm just low-IQ. Don't listen to me. I wasn't high-IQ enough to take Moldbug seriously either.