The Case for AGI by 2030

by doitLP on 5/19/2025, 12:03 AM with 5 comments

by Zambyte on 5/19/2025, 1:21 AM

I'd be interested in a case against AGI now. Can you define "general intelligence" in a measurable way (even subjectively) that includes things usually considered to have general intelligence (at least humans) but doesn't include existing AI systems?

People seem to have this idea that AGI is an all-knowing oracle of truth, perpetually beyond current capabilities. That framing is useful for convincing VCs that you need more funding, and for fear-mongering the government into regulating away competition. A simpler and more reasonable conclusion is that AGI has been here for years, and that the reality just isn't as exciting as sci-fi.

Will AGI capabilities increase? Sure, as we build out more tools for AGI to reach for and as the intelligent agents themselves mature. But fundamentally, it is already here.

by Lockal on 5/19/2025, 2:50 AM

Ah, "machines will be capable, within twenty years, of doing any work a man can do" - 1965

by andsoitis on 5/19/2025, 12:13 AM

> “we are now confident we know how to build AGI”

Uhm. If you knew how to build AGI, what would your logical next step be? And would that step be in the interest of humanity?