This won't make a dent in the logical armor of AI optimists:
[ ] If you are not intimately familiar with the development of AI, your warnings on safety can be disregarded due to your basic ignorance about the development of AI
[x] If you are intimately familiar with the development of AI, your warnings on safety can be disregarded due to potential conflicts of interest and koolaid drinking
Unbridled optimism lives another day!
"sailing against the wind" is a very apt description of hardlining Yuddite philosophy when your company's models got maybe 20% better in the past two years (original GPT4 is still the best model I've dealt with to this day), while local models got 1000% better.
we should all thank G-d these people weren't around during the advent of personal computing and the internet - we'd have word filters in our fucking text processors and publishing something on the internet would require written permission from your local DEI commissar.
arrogance, pure fucking hubris brought about by the incomprehensibly stupid assumption that they will get to be the stewards of this technology.
Thank you Jan for your work and your courage to act and speak candidly about these important topics.
Open question to HN: to your knowledge/experience, which AGI-building companies or projects have a culture most closely aligned with keeping safety, security, privacy, etc. as high a priority as "winning the race" on this new frontier land-grab? I'd love to find and support those teams over the teams that spend more time focused on getting investment and market share.
X.com links are currently broken on HN so I posted a screenshot instead
> āOver the past few months [ā¦] we were struggling for compute
OpenAI literally said they were setting aside 20% of compute to ensure alignment [1], but if you read the fine print, what they said was that they are "dedicating 20% of the compute we've secured *to date* to this effort" (emphasis mine). So if their overall compute has since increased 10x, that 20% is suddenly 2% of the total, right? (Quick arithmetic sketch below.) Is OpenAI going to be responsible, or is it just a mad race (modelled from the top) to "win" the AI game?
[1] https://openai.com/index/introducing-superalignment/?utm_sou...
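A back-of-the-envelope sketch of the dilution, in Python. The 10x growth factor is a hypothetical for illustration, not a figure OpenAI has reported:

    # A pledge of 20% of compute "secured to date" is a fixed
    # absolute amount, so its share of the total shrinks as
    # total compute grows.
    pledged_share_at_announcement = 0.20
    growth_factor = 10  # hypothetical: total compute grows 10x

    # The pledged amount stays constant in absolute terms, so its
    # share of the grown total is:
    effective_share = pledged_share_at_announcement / growth_factor
    print(f"{effective_share:.0%}")  # -> 2%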
Seems very arrogant at the end to say humanity is waiting on you. It's not like Covid-19, where the population was literally looking to scientists/governments for an active solution. AGI is something a select set of computer nerds want, but the public isn't clamoring for it. In fact, the entire idea of a safety/alignment/cultural shift points to the potential downsides for society if it actually were achieved.
Meanwhile, DeepMind with AlphaFold etc. is showing AI can help with pressing problems of humanity without AGI as the necessary first step.
So... this is relevant if you think AGI superintelligence is a thing, and not so clearly relevant otherwise?
Seems to me the management isn't quite as high on their own supply (at least privately), and they do not believe anything even remotely resembling "AGI" is possible in the foreseeable future. Which I, and many other people in this field, do agree with.
> "Learn to feel the AGI."
This is a flabbergasting statement to me, but it is probably the necessary attitude to push the AI/ML frontier, I guess.
I feel old.
It really baffles me that even extremely knowledgeable insiders like this end up buying the AGI hype. LLMs are extremely novel and have lots of groundbreaking applications, but this idea that we are headed directly into fantasy sci-fi A.I. is completely bonkers.
It's one thing to push this kind of hype to get people talking about A.I. if you're trying to capitalize on this space. It's something else entirely to swallow your own marketing BS as if it were gospel.
But let's face it: this guy probably isn't serious. He's just spewing more hype upon departing OpenAI, looking for the next tech company to hire him.
I was and am very skeptical of the "superalignment" plan (which was "build an AI that does alignment research for us, and then ask it what to do"). But it's a bad look for OpenAI if they pledged 20% of their compute for this effort and then just didn't actually give them the compute.
He's convinced that AGI is an eventuality.
His call for preparation makes it sound like it's near.
The only sane person in AI is LeCun.
It is excellent that he is speaking out.
I don't believe we can create something more intelligent than us. Smarter, yes, but not more intelligent. We will always have a deeper and more nuanced understanding of reality than our artificial counterparts. But that does not mean AI cannot be a danger.
"Learn to feel the AGI"
shades of
> Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was āfeel the AGI,ā a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAIās 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: āFeel the AGI! Feel the AGI!ā The phrase itself was popular enough that OpenAI employees created a special āFeel the AGIā reaction emoji in Slack.
From the Atlantic article. He seems like a serious person, but it's hard to take this circle seriously.