What a bad article. I mean how biased can you be, to put the first big quote from someone who wrote a book called "The AI con". Come on! This feels like the "deepseek r1 is the death of nvda" of 6 months ago. Someone is making a play, and whoever wrote this article fell for it.
gpt5 has always been about making a "collection of models" work together and not about model++. This was announced what, a year ago? And they delivered. Capabilities ~90-110% of their top tier old models at 4-6x lower price. That's insane!
gpt5-mini is insane for its price, in agentic coding. I've had great sessions with it, at 0.x$ / session. It can do things that claude 3.5/3.7 couldn't do ~6 months ago, at 10-15x the price. Whatever RL they did is working wonders.
I sure am looking forward to what will happen to my power bill once Facebook decides to default on its share of the bill for the massive power plants Entergy is building solely to power the huge-ass data center FB's building in northern Louisiana. https://www.knoe.com/2025/08/19/entergy-power-plant-meta-dat...
After the crash, tech "industry leaders" will struggle to explain why/how they were conned into believing that intelligence was a simple database function with some probability and statistics sprinkled on top.
I just don't get why Altman had to hype this release so much. What was the plan?
Also, what was the deal with all those mysterious Star Wars pictures?
Dot-com bubble is a good analogy.
Nvidia will be the Cisco of this era. Cisco was the world's most valuable company when the dot-com bubble peaked, then went down almost 90% in 2 years. There was lots of "dark fiber" all around (fiber-optic cable already installed but not used).
I think OpenAI and most small AI companies will go down. Microsoft, Google, and Meta will scale down and write down losses, but they'll keep going and won't stop research.
I hope the AI bubble leaves behind massive amounts of cloud compute that companies are forced to sell at the price of electricity and upkeep for years. New startups with new ideas can build upon it.
Investors will feel poor, crypto market will crash and so on.
I have also concluded that it is more likely a bubble than not.
Anyone looked at buying S&P sector-specific ETFs? For people who want to keep their portfolio spread as widely as possible, but are frightened by how tech-heavy the S&P index is, these seem like a good option. But they all seem to have high expense ratios (the first one I pulled up charges 0.39%).
The only thing it crystalizes is that the guys in that one meeting were right... there is no moat. The author might be right, but the problem will be oversupply.
A comment in a previous thread stuck with me; it said something like "AI is successful because nothing else interesting is happening".
That rings true and I suspect the bubble won't burst until something else comes along to steal the show.
Hum... Does anybody expect the US government to reduce the money supply or distribute it? Or for the dollar to devalue enough that their money doesn't make much of a difference anymore?
If the answer is "no" to all of the above, then you should expect some bubble to keep going. At most, they will change the subject of the bubble.
Some of us have lived through multiple bubbles and know that often, the underlying bits are useful and will gain widespread acceptance. Just play long, and don’t feed the hypemonster.
The core thesis seems valid: "AI bots seem intelligent, because they've achieved the ability to seem coherent in their use of language. But that's different from cognition."
As it happens, LLMs work comparatively well with code. Is this because code does not refer (much) to the outside world and fits well with the workings of a statistical machine? In that case, the LLM's output can also be verified more easily: by inspection from an expert, and by compiling, typechecking, linting, and running it. Although there might be hidden bugs that only show up later.
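That verification step can even be automated. A minimal sketch in Python (the helper name `verify_generated` and the ten-second timeout are illustrative assumptions, not anything from the thread): compile the generated code as a syntax check, then run it in a throwaway interpreter and treat a non-zero exit as failure.

```python
import subprocess
import sys
import tempfile

def verify_generated(code: str) -> bool:
    """Mechanically check LLM-generated Python: syntax check, then a test run.

    Hypothetical helper for illustration; a real pipeline would add linting,
    type checking, and a proper sandbox.
    """
    # Step 1: syntax check -- a SyntaxError means the model emitted non-code.
    try:
        compile(code, "<generated>", "exec")
    except SyntaxError:
        return False

    # Step 2: run it in a separate interpreter; non-zero exit = runtime failure.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, timeout=10)
    return result.returncode == 0

# A plausible model output vs. one with a bug that only shows at runtime.
good = "print(sum(range(10)))"
bad = "print(undefined_name)"
```

Of course, this only catches surface errors; bugs that run cleanly but do the wrong thing still slip through, which is exactly the "hidden bugs that only show up later" problem.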
The author isn't exactly a thought leader in the space, or really any space for that matter. Opinion worth nothing.
Internet traffic kept growing throughout the dotcom bubble. That valuations got ahead of themselves didn't mean that there wasn't something real driving the hype.
Even if AI valuations have a sharp correction, there will still be a great need—and demand—for compute.
>perceptions of AI’s relentless march toward becoming more intelligent ... came to a screeching halt Aug. 7
Overstates things a bit. It seems unlikely OpenAI will release human level AI in the next year or two, but the march of AI improving goes on.
Also, re the AI Con book saying AI is a marketing term: I'm more inclined to go with Wikipedia's "a field of research in computer science".
Though there is a bit of a dot-com bubble feel to valuations.
Using how GPT-5 generates text within an image is a terrible way to test it.
If you ask it to list all 50 states or all US presidents it does it no problem. Asking it to generate the text of the answer in an image is a piss poor way of testing a language model.
I heavily dislike GPT-5, but at least give it a fair review.
nothing ever happens
"Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.
"What I had not realized," Weizenbaum wrote in 1976, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Weizenbaum warned that the "reckless anthropomorphization of the computer" - that is, treating it as some sort of thinking companion - produced a "simpleminded view of intelligence.""
https://www.theguardian.com/technology/2023/jul/25/joseph-we...
An HN commenter rates Weizenbaum's 1976 book the "greatest tech book of all-time": https://news.ycombinator.com/item?id=36875958
This article tries to argue that the AI bubble has burst by pointing to the failed release of GPT-5. Admittedly, the release of GPT-5 was somewhat of a flop, but I think it's more of a failure in its launch rather than the model itself. In fact, if you use the GPT-5 Thinking model, it's actually quite good. They attempted to make the model automatically route to different levels of thinking intensity, but the routing didn't work very well, which led to the various bad cases people experienced.
Meh.
Crashes come when there's no real business value.
I use AI all day and I’m sure I’m not the only one.
https://archive.ph/2025.08.20-113134/https://www.latimes.com...