You shouldn't build your career around existential risk

by vrnvu on 12/25/2024, 11:21 AM with 2 comments

by Fricken on 12/25/2024, 11:43 AM

> I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern?

Either AGI arrives and kills us all, or it arrives and automates all our jobs, or it doesn't arrive and Yudkowsky can carry on with his career. Am I missing something?

by vouaobrasil on 12/25/2024, 12:14 PM

> I mean, what happens to Eliezer Yudkowsky's -- the biggest advocate of stopping all AI research due to AI existential risk -- career if it turns out that AI risk is simply not an existential concern? Would anyone care about him at all?

I think the post misses the fact that it's not simply "on" or "off": even if AI is not a literal existential risk, it is still an immense risk, so working to stop it remains a worthwhile activity that can have many positive results for society.