Only You Can Stop an AI Apocalypse

by pidge on 2/14/2023, 4:45 AM with 120 comments

by nhod on 2/14/2023, 6:04 AM

Actually, no, "only you" can't stop it. No one can.

There was a book in the '80s called The Hundredth Monkey that posited that if enough people opposed nuclear technology, there would be a magical critical mass after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.

We know how that went.

Like nuclear technology, AI technology development is in part an arms race. One huge difference is that AI's technological underpinnings are already far more widely available and understood than any nuclear technology ever was. Its myriad forms — neural nets, deepfakes, predictive spell check, Adobe Photoshop object selection filters, spam filters, deep learning, machine learning, large language models, etc — already permeate much of technology and life. Where does one draw the line?

Like nuclear technology, if any entity decides to "voluntarily shut AI down," the other governments/rogue states/corporations/NGOs/financial firms/universities/etc. in the world will merrily continue developing it, leaving the voluntary Luddite in the same dust as every other voluntary Luddite. Or even worse, such an entity (say, the US or Chinese government) would in all likelihood not actually stop developing AI itself, just ban others from having it, which would create another insane governmental "war" and enforcement regime, like the War on Drugs or the War on Terrorism. Again, we know how both of those went.

This is one of the reasons OpenAI was founded. To paraphrase @sama, AGI should belong to everyone, not to any one corporation or government. https://www.yahoo.com/now/openai-founder-sam-altman-says-221...

At the end of the day, the author may be right. Being right is also irrelevant. AI development will continue to hurtle forward. The real question is what do we do with it?

by jhbadger on 2/14/2023, 5:14 AM

"Most AI researchers think good outcomes are more likely. This seems just blind faith, though"

Or, you know, they might actually know something about the subject? On a similar note, most physicists don't think particle colliders will cause mini black holes which will swallow the Earth, but the fear gets repeated by non-physicists.

by jimmytucson on 2/14/2023, 5:27 AM

I realize this may be satire but it is deeply relatable:

> AI will make critical decisions that we cannot understand.

Am I the only one that experiences this?

For example: you know the way, and the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you do.

You are analyzing a chess position and the engine suggests a move that you can't understand. The engine is far better than any human player ever, so you go with the engine's top choice the next time you reach that position.

You are betting on college basketball and everything you know about the matchup suggests team A should cover the spread against team B, but your model favors team B. Your model has performed much better than the human oddsmakers, so you ignore your intuition and bet on team B.
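(Taking the chess example concretely: the whole act of deference is a few lines of code. A minimal sketch, assuming the python-chess library and a local Stockfish binary on the PATH; the comment names neither, so both are assumptions.)

```python
# "Just play the engine's top choice" -- sketch assuming python-chess
# and a Stockfish binary; neither is specified in the comment above.
import chess
import chess.engine

def engine_move(fen: str, think_seconds: float = 1.0) -> str:
    """Return the engine's preferred move for a position, understood or not."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    try:
        result = engine.play(board, chess.engine.Limit(time=think_seconds))
    finally:
        engine.quit()
    return board.san(result.move)

if __name__ == "__main__":
    # Whatever it says, we play it: the engine is far stronger than we are.
    print(engine_move(chess.STARTING_FEN))
```

Nothing in the returned move explains itself; you trust it on the engine's track record alone, which is exactly the dynamic the comment describes.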

by hathawsh on 2/14/2023, 5:26 AM

Today I got a peek into the future. I asked ChatGPT to be the brain for a simple thermostat. I typed the following:

"Imagine you are a thermostat for a house HVAC system. The commands you can use to control the HVAC are "HEAT ON", "HEAT OFF", "AC ON", and "AC OFF". I will tell you the temperature and my preferred temperature and you will command the HVAC system. Ready?"

ChatGPT happily obliged. I used plain text to tell it the current temperature and my preferred temperature. It did everything I expected and it even remembered the current temperature and preference. Then I said, "From now on, please respond only with the command I should issue. Do not add explanatory text. Understood?" It understood, and it started replying with pure commands that I could feed to a DIY thermostat.

Then came the icing on the cake. I said, "If the temperature reaches unrealistic extremes such as 150F or 0F, you should use the command "SHUTDOWN". Understood?" It understood. Then I told it the temperature was 120F. It replied with the SHUTDOWN command. I asked it why it did that (when I had told it the maximum was 150F, not 120F). It explained, "I recommended shutting down the HVAC system because the temperature of 120 F is an unrealistic and potentially dangerous temperature. High temperatures like this can damage the HVAC system and create a hazardous environment for the inhabitants of the house. In such cases, it's best to shut down the system to prevent further damage and ensure safety."

It went beyond the parameters I gave it and applied the context it already had on HVAC systems. I had written an advanced program without writing or reading a single line of computer code. I wouldn't rely on it for anything that could damage anything, but it's a seriously cool way to program a computer.
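(The setup above fits in a short script. A minimal sketch, assuming the pre-1.0 OpenAI Python client's ChatCompletion endpoint as transport; the commenter used the ChatGPT web UI, so the client, the model name, and the fallback behavior here are all assumptions.)

```python
# Sketch of the "ChatGPT as thermostat brain" loop described above.
# Assumptions: the pre-1.0 OpenAI Python client (reads OPENAI_API_KEY from
# the environment) and gpt-3.5-turbo; the original experiment used the
# ChatGPT web UI, not the API.
import openai

SYSTEM_PROMPT = (
    'Imagine you are a thermostat for a house HVAC system. The commands you '
    'can use to control the HVAC are "HEAT ON", "HEAT OFF", "AC ON", and '
    '"AC OFF". If the temperature reaches unrealistic extremes such as 150F '
    'or 0F, use the command "SHUTDOWN". Respond only with the command.'
)
VALID_COMMANDS = {"HEAT ON", "HEAT OFF", "AC ON", "AC OFF", "SHUTDOWN"}

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def thermostat_step(current_f: int, preferred_f: int) -> str:
    """Report temperatures to the model and return the command it issues."""
    history.append({
        "role": "user",
        "content": f"The temperature is {current_f}F. "
                   f"My preferred temperature is {preferred_f}F.",
    })
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=history
    )
    command = response.choices[0].message.content.strip()
    history.append({"role": "assistant", "content": command})
    # Never actuate hardware on a reply we don't recognize.
    return command if command in VALID_COMMANDS else "SHUTDOWN"

if __name__ == "__main__":
    print(thermostat_step(64, 70))    # expected: HEAT ON
    print(thermostat_step(120, 70))   # expected: SHUTDOWN, as in the comment
```

The fallback to SHUTDOWN on an unrecognized reply is the kind of safeguard the comment's own caveat ("I wouldn't rely on it for anything that could damage anything") suggests you'd want before feeding model output to real hardware.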

by Bouncingsoul1 on 2/14/2023, 5:20 AM

"Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant." Uhuh, so what would be the difference?

by Dalewyn on 2/14/2023, 5:11 AM

As someone who subscribes to the Doraemon and GaoGaiGar philosophies of artificial intelligence, I for one welcome our artificial intelligence friends.

And no, "AI" as is spammed today is not intelligence. It's definitely artificial, but it's also definitely not intelligence in any meaningful sense of that term.

by Reubend on 2/14/2023, 5:40 AM

If this is written in earnest, it's absolute drivel.

If this is satire, then it's lacking enough humor to make that readily apparent.

by LinkLink on 2/14/2023, 5:56 AM

This is just screaming into the void, but I have an ego so I must.

If you believe AI will bring innovation, you believe in its capability.

If you believe in its capability, you must also understand it is capable of wrongdoing as well as good deeds.

Most of the comments here are just arguing about this basic chain of logic, bringing in things like "PhD", "fallacies", or "blind faith".

If you feel the need to argue, know that there exist people who would walk toward the sun until it burned them, and there exist people who will shut themselves indoors the day somebody explains light to them.

Both of them are extremely outspoken about their views.

by nonbirithm on 2/14/2023, 5:31 AM

Was there any reasonable way to prevent e.g. the current revolt against generative art software without first releasing the software so people could realize how powerful the tech was? Is there ever going to be a point where the tech we have is declared "good enough" so future conflicts like these are avoided?

There are good arguments that generative tech is setting everyone's expectations, so that companies will be forced to adopt AI (with all the implications attached) to keep up with the competition or go under; that it's dehumanizing creation through a vague equivalence of "intelligence" with "writing some words in a prompt"; and that it has the potential to devalue art and, through buzzwords and misleading branding, make the masses believe human artists are just really sophisticated prompt-replicators. And it's almost impossible to change people's minds once they're set in a position.

At the same time, the work on diffusion networks is rapidly accelerating, with novel techniques released seemingly every week, and researchers seem content with writing a couple of ethics paragraphs to go along with their fully replicable source code releases. It seems their imagination was not elaborate enough to foresee the effects those releases would have on public discourse. Even with that discourse heating up by the day, the research continues.

It's never worked to just tell random people not to torrent things, even when it's illegal. They'll just do it anyway. Likewise, it's futile to tell people not to use software for ill, no matter how many legal clauses are attached. If the technology is available, it will be misused at some point.

I'm convinced the only way this will be abated is if AI research is treated like gain-of-function research, where the threat is not to human life but to individual, essential human qualities, like the way art is treated. That would mean the future technology would simply not exist, so it could not be exploited.

People in universities are already inventing software that nullifies the effects of diffusion-model training, because lots of people don't want the diffusion software to affect their lives.

I'm depressed about what this implies for the coming future of humanity. Is the train of endless technological progress just a given that must be followed through, consequences be damned? Is anyone in this sphere interested in stopping themselves and making do with what tech is already known? (Knowing that at least one person out of the billions on Earth is bound to say "no", rendering the exercise pointless.)

by krmboya on 2/14/2023, 6:52 AM

The biggest threat I see from AI is that it concentrates power over the many in the hands of the few. Kind of like atomic bombs.

Some manifestations of this: the TikTok algorithm that is rewiring the attention spans of millions globally is controlled by a single corp. Likewise, the almighty Google algorithm can decide to shut down your account, and there's practically no recourse.

As long as AI remains the preserve of those with large amounts of data and big infrastructure, this will be the case.

by atemerev on 2/14/2023, 8:07 AM

It doesn’t work like this. Once a technology has been invented, you cannot regulate it out of existence. It will be developed to its conclusion, if only out of curiosity, not to mention the powerful market forces.

However, I don’t see anything too awful in mass adoption of AI. If it is unreliable, people will soon learn not to rely on its decisions and keep it around just for the entertainment value (the optimistic scenario, which is what happens now). If it can be made reliable, it will evict humans from most decision loops (the pessimistic scenario; but at least we will have a reliable AI, for the short time before it is plugged into military decision-making).

Attempting to regulate it is futile, though. If there is a competitive advantage, it will be exploited in any possible way. It is easy to regulate something that can be observed. AI in decision loops can be hidden.

by lvxferre on 2/14/2023, 6:00 AM

This sounds a lot like satire. This excerpt for example is blatantly self-contradictory:

>We’ve found zero [scenarios] that lead to good outcomes. // Most AI researchers think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.

So they found zero scenarios that lead to good outcomes, but most AI researchers think that good outcomes are more likely?

Brushing off a majority view as w*shful "thinking", and then backing up the argument with a... majority view?

__________________

Anyway. The problem with AI-driven decisions is moral in nature, not technological. AI is a tool and should be seen as such, not as a moral agent that can be held responsible for its* own actions. Less "the AI did it", more "[Person] did it using an AI".

by m3kw9 on 2/14/2023, 5:21 AM

There is an aura of inevitability to scientific progress: once we have tasted its powers, it won’t be stopped. AI is such a tech, since it seems practically achievable within a foreseeable time frame. If time travel were in the same situation, it wouldn’t be stopped either.

by stephc_int13 on 2/14/2023, 5:19 AM

I think this is pointless, even if somewhat more refined than the common tropes about AI apocalypse.

The risk of building a gigantic machine (not in the literal sense) that is detrimental to humans is very real, and I think that each time our societies become _too_ organized or optimized, we also remove most individual freedom as a side effect.

Insane concentration of power is not a new phenomenon, but for sure this new tech could amplify/accelerate it.

But I don't see how we can avoid exploring this path, there is too much to learn.

by themodelplumber on 2/14/2023, 5:32 AM

Interesting but there are some red flags coming up here, like:

> This seems just blind faith, though.

This is logic? Eh, better to show your work. Otherwise this kind of wording seems like a hand wave.

Why does it seem like blind faith--by what logic is it blind, and by what logic is it faith?

The blind-faithers in question are people you said were researchers, after all, and are probably OK with being held to account.

Let's hear their response to being told they are relying on blind faith, as compared to, let's say, lots of experience in the topic and a basic knowledge of probability. But that's just one example.

> A majority surveyed also acknowledge that utter catastrophe is quite possible.

Uh, the "quite" there is leading to a bit of a weasel-word effect. Where was "quite possible" derived from, vs. just "possible"?

There's a cumulative effect of being rushed to judgment here, so I hope there aren't a lot of important little details being rushed over too. The emotional floodgates can be hard to keep regulated in such a situation.

BTW, to me the site does a better job of highlighting the need for creative mitigating roles, like:

- AI process liaison: How should/did we get this answer

- AI legal specialist: How do we do this / can we do this

- AI ethics and safety consultant: How can we train people to not mess this up

We'll probably have positions like those, in multiple layers, at any really important level where AI being directly integrated into decision-making could be dangerous to human health or propagation or whatever.

Add high levels of education, licensing, and certification as needed.

Otherwise it's a lot like complaining about anything else that's broadly capable, but doesn't have proper safeguards developed yet.

There's a comment here about the site being satire, but really--if so, it's extremely cryptic satire, to the degree that it's more like the kind of thing you'd write if you wanted to be able to claim 'satire' OR 'not satire' later on, as events develop...I really doubt it though.

by zxcvbn4038 on 2/14/2023, 7:36 AM

I think the original author gives AI too much credit. We’ve seen nothing but statistical tricks, no actual intelligence. I’m not losing any sleep. ZzZzZz..

by SV_BubbleTime on 2/14/2023, 5:05 AM

Good satire; thought it was real until I got to “We don’t know how our AI systems work, we don’t know what they can do,”

I look forward to the stupid phase of mistakes with AI. Because it surely has to be better than the past decade of free money devoted almost exclusively to advertising optimization and tightening of controls.

by EamonnMR on 2/14/2023, 7:18 AM

The AI apocalypse has already come for LinkedIn. I swear people are using ChatGPT to churn out posts now. Not that there was much worth saving there in the first place (it's a perfectly good chat app for recruiting with an awkward Facebook feed attached) but it is weird to see people debase themselves so.

by hulitu on 2/14/2023, 7:13 AM

> Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.

Don't worry. It is just a recurrent hype. In 30 years it will come again.

Today the only use for "AI" seems to be deep fakes and the spread of disinformation.

by renewiltord on 2/14/2023, 6:58 AM

AI safetyists producing more of what they're masterful at producing: blog posts. AI safetyism is all just "awareness building" and blog posts. Sorry, man, e/acc takes the day through knowing how to make things. AI safety blog posts will eventually just be constructable with AI.

by thastings on 2/14/2023, 5:34 AM

Reminds me of an Asimov short story where the world's whole economy was managed by four supercomputers [0].

0: https://en.wikipedia.org/wiki/The_Evitable_Conflict

by vbezhenar on 2/14/2023, 5:11 AM

Butlerian Jihad is coming?

by xwdv on 2/14/2023, 5:11 AM

If AI were capable of building physical things and carrying out physical tasks in the real world, probably no one would complain.

But when AI decides to speak and create artwork, people lose their minds.

by woopsn on 2/14/2023, 5:21 AM

"For helpful comments on drafts, and/or other assistance, I thank: ..."

Odd to have Acknowledgements but no information at all about the author. Who are they?

by petre on 2/14/2023, 7:23 AM

The MPs and lawyers will stop it out of fear of being replaced.

And yes, I agree that it's only great for marketing, propaganda and BS generation.

by IYasha on 2/14/2023, 1:41 PM

In case the author is reading this: your spam filter is bad and you should feel bad. I lost my 30-line message to you. Thanks.

by gfodor on 2/14/2023, 5:55 AM

There is no non-AI future. You just get to help choose who gets to whatever the finish line is first.

by johnnyApplePRNG on 2/14/2023, 6:38 AM

>Governments will take radical actions that make no sense to their own leaders.

Sounds like a positive thing.

by teucris on 2/14/2023, 6:31 AM

The more I read, the more I feel like this is actually a well-written piece on why capitalism is causing an apocalypse. Every time the author references AI - that it’s out of control, that it’s influencing major portions of society, that it’s strengthening itself - I think what he actually means is that the corporations are those things, and AI is a tool in their arsenal. He tries to make the argument that it’s the other way around (for a fictional web search company):

> The institution has its own agency: its own purposes, plans, reasons, and logic, which are more powerful than the humans it employs. Those are subordinate in turn to the AI the company depends on for its survival.

The first sentence is absolutely correct. The second? In no way can I believe that the corporation doesn’t have a good understanding of exactly how it can manipulate the AI systems it leverages to achieve its goals.

by madrox on 2/14/2023, 5:17 AM

Anti-AI seems poised to be the anti-vax of the next few years, where laymen pick up lines like "we have no idea how AI does the stuff it does" and take them wholly out of context. As a society, we have to decide how we feel about AI in legal and cultural contexts as we expand our capabilities, but there's time to do that. GitHub Copilot will not end up launching nuclear missiles in the meantime. Stuff like this just creates FUD.

I don't know what to make of this, though. This author, after some googling, seems to know a thing or two, but reading some of this without any background on him, he came off to me as a crackpot. I really hope AI doesn't have its Robert Malone.

by firatsarlar on 2/14/2023, 7:09 AM

Write me an article like a liberal journalist; fix grammar, typos, etc.; use markdown; about "how money buys awareness"; don't use "the power of money to change things", it is cliché; be more fluent, short, metaphoric; use "goldy" instead of "money" whenever you would use the word money. You woke up in the morning to a burning-plastic smell; your throat hurts; you think you should call the authorities, but you already know the source of the smell: burning at the construction site on the back road. You decide to call, then start to yell at the brothers there: what is this smell, what are you burning in that thing (a cylindrical oil container)? Some stay silent, one walks in. Then what? I told them, yelling: turn that fire off, the authorities are coming, they will fine you. The last man asks: did you call them? Ooh, I think, going person to person is not ideal. What, I asked, do you mean me? He replied: OK bro, we'll turn that off. I'm still waiting, and it is continuing to smell. A lot of people live here, say hundreds. I may call the police and say something, but the owner of the construction site has strong relations with the authorities, so we are alone.

How Goldy Buys Awareness

As I woke up this morning, the strong smell of burning plastic filled my nostrils, and I realized that I needed to act fast. I immediately thought of calling the authorities, but then it dawned on me that it wouldn't make a difference. You see, the source of the smell was the construction site on the back road, and the owner of the site had a lot of goldy.

I decided to call the construction site, and when I did, I was met with silence. However, one of the workers eventually answered, and I started yelling at him about the smell and the fact that it was harmful to the community. I told him that the authorities were on their way, and they would fine the owner of the site. But the worker seemed unconcerned and asked me if I had called them.

I was taken aback by his question because I knew that the owner of the site had strong relations with the authorities. So even if I called the police, it wouldn't make much of a difference. It was then that I realized the power of goldy. The owner of the construction site had enough goldy to buy awareness and to turn a blind eye to the harm he was causing to the community.

The sad truth is that in today's world, goldy buys awareness. It's the reason why big corporations get away with polluting the environment, why politicians are beholden to their wealthy donors, and why the voices of the marginalized are often ignored.

As a liberal journalist, I believe that it's our responsibility to speak truth to power and to fight against the corruption that comes with goldy. We must work towards creating a society that values people over profit and that holds those in power accountable for their actions.

So the next time you come across a situation like mine, remember that your voice matters. Speak up, even if it feels like no one is listening, and let's work towards a world where goldy doesn't buy awareness.


Make this a tweet

"The sad truth is that in today's world, goldy buys awareness. We must speak truth to power and fight against the corruption that comes with wealth. Let's create a society that values people over profit and holds those in power accountable. #GoldyBuysAwareness #SpeakUp"

by twblalock on 2/14/2023, 5:07 AM

The same internet technology that empowers AI also empowers weird self-published stuff like this. The author should reflect on that.