I'd be interested in a case against AGI now. Can you define "general intelligence" in a measurable way (even subjectively) that includes things usually considered to have general intelligence (at least humans) but doesn't include existing AI systems?
People seem to have this idea of AGI as an all-knowing oracle of truth that is perpetually beyond current capabilities. This framing is useful for convincing VCs that you need more funding, and for fear-mongering the government into regulating away competition. A simple and reasonable alternative conclusion is that AGI has been here for years, and that the reality just isn't as exciting as sci-fi.
Will AGI capabilities increase? Sure, as we build out more tools for AGI to reach for, and as the intelligent agents themselves mature. Fundamentally, it is here.