A couple of days ago it leaked that OpenAI was planning on launching new pricing for their AI Agents. $20K/mo for their PhD Level Agent, $10K/mo for their Software Developer Agent, and $2K/mo for their Knowledge Worker Agent. I found it very telling. Not because I think anyone is going to pay this, but rather because this is the type of pricing they need to actually make money. At $20 or even $200 per month, they'll never even come close to breaking even.
"Microsoft has poured over $13 billion into the AI firm since 2019..."
My understanding is that this isn't really true, as most of those "dollars" were actually Azure credits. I'm not saying those are free (for Microsoft), but they're a lot cheaper than the price tag suggests. Companies that give away coupons or free gift certificates do bear a cost, but not a cost equivalent to the number on them, especially if they have spare capacity.
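To make the coupon analogy concrete, here's a back-of-the-envelope sketch in Python. The 30% marginal-cost rate is a made-up assumption for illustration, not a reported figure:

```python
# Back-of-the-envelope: what $13B of "investment" might actually cost
# Microsoft if most of it is Azure credits. The 30% marginal-cost rate
# is a hypothetical assumption, not a reported number.
face_value_billions = 13.0         # headline investment, per reporting
assumed_marginal_cost_rate = 0.30  # hypothetical cost to serve $1 of credits

actual_cost_billions = face_value_billions * assumed_marginal_cost_rate
print(f"Headline: ${face_value_billions:.0f}B, "
      f"assumed real cost: ~${actual_cost_billions:.1f}B")
```

Under that (purely illustrative) assumption, the "$13 billion" headline would translate to something closer to $4 billion in actual cost, and less still if the credits are consumed on otherwise-idle capacity.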
I think they have realized that even if OpenAI is first, it won't last long, so really it's just compute at scale, which is something they already do themselves.
Sam Altman should have sold OpenAI to Musk for $90 billion or whatever he was willing to pay (assuming he was serious, as he was when he bought Twitter). While I find LLMs interesting and feel many places could use them, I also think this is like hitting everything with a hammer and seeing where the nail was. People used OpenAI as a hammer because it was popular, and now everyone would like to go their own way. For $90 billion he could find the next hammer or not care. But when the value of this hammer drops (not if, but when), he will be lucky if he can get double digits for it. Maybe someone will buy them just for the customer base, but these models can become obsolete quickly, and that leaves OpenAI with absolutely nothing else as a company. Even the talent would leave (a lot of it has). Musk and Altman share the same ego, but if I were Altman, I would cash out while the market is riding high.
I think they both want a future without each other. OpenAI will eventually want to vertically integrate up towards applications (Microsoft's space) and Microsoft wants to do the opposite in order to have more control over what is prioritized, control costs, etc.
Thematically, investing billions into startup AI frontier models makes sense if you believe first-to-AGI is likely worth a trillion dollars or more.
Investing in second or third place is likely valuable at similar scales too.
But outside of that, MSFT's move indicates that frontier models' most valuable current use case - enterprise-level API users - is likely to be significantly commoditized.
And the majority of proceeds will likely be captured by (a) those with integrated product distribution - MSFT in this case - and (b) data center partners for inference and query support.
Setting aside the actual performance and product implementation, this suggests to me that Apple's approach was more strategic.
That is, integrating use of their own model, amplifying capability via OpenAI queries.
Again, this is not to talk up the actual quality of the product releases so far--they haven't been good--but the foundation of "we'll try to rely on our own models when we can" was the right place to start from.
Clear as day when he said this during the openai fiasco:
"we have the people, we have the compute, we have the data, we have everything. we are below them, above them, around them." -- satya nadella
It's clear that OpenAI has peaked. Possibly because the AI hype in general has peaked, but I think more so because the opportunity has become flooded and commoditized, and only the fetishists are still True Believers (something we also saw during the crypto hype days, though most decried it at the time).
Nothing against them, but the solutions have become commoditized, and OpenAI is going to lack the network effects that these other companies have.
Perhaps there will be new breakthroughs in the near future that produce even more value, but how long can a moat be sustained? Moats in AI are filled in faster than they are dug.
SoftBank's Masa's magic is convincing everyone, every time, that he hasn't consistently top-ticked every market he's invested in for the last decade. Maybe Satya's finally broken himself of the spell [1].
[1] https://www.nytimes.com/2024/10/01/business/dealbook/softban...
They probably saw the latest models, like GPT-4.5, not being as revolutionary as expected, and DeepSeek and others catching up.
The more surprising thing would be if Microsoft wasn’t hedging their bets and planning for both a future WITH and WITHOUT OpenAI.
This is just what companies at $2T scale do.
I had skimmed the headline and thought, "Microsoft is plotting a future without AI," and was hopeful.
Then I read the article.
Plotting for a future without Microsoft.
Original story: Microsoft’s AI Guru Wants Independence From OpenAI. That’s Easier Said Than Done, https://www.theinformation.com/articles/microsofts-ai-guru-w...
For cloud providers it makes sense to be model agnostic.
While we still live in a datacenter driven world, models will become more efficient and move down the value chain to consumer devices.
For Enterprise, these companies will need to regulate model risk and having models fine-tuned on proprietary data at scale will be an important competitive differentiator.
I'd be willing to bet that the largest use of LLMs they have is GitHub Copilot, and Claude should be the default there.
OpenAI has not been interesting to me for a long time; every time I try it, I get the same feeling.
Some of the 4.5 posts have been surprisingly good, I really like the tone. Hoping they can distill that into their future models.
Lately I've been thinking about the unintended effects that AI tools (such as GPT-based assistants) might have on technological innovation. Let me explain:
Suppose an AI assistant is heavily trained on a popular technology stack, such as React. Developers naturally rely on AI for quick solutions, best practices, and problem solving. While this certainly increases productivity, doesn't it implicitly discourage exploration of potentially superior alternative technologies?
My concern is that a heavy reliance on AI could reinforce existing standards and discourage developers from experimenting or inventing radically new approaches. If everyone is using AI-based solutions built on dominant frameworks, where does the motivation to explore novel platforms or languages come from?
Microsoft is just so bad at marketing their products, and their branding is confusing. Unfortunately, until they fix that, any consumer-facing product is going to falter. Look at the recent Microsoft 365 and Office 365 rebrands. The business side of things will still make money, but watching them flounder on consumer-facing products is just so frustrating. Surface and Xbox are the only two brands that seem to have somewhat escaped the gravity of the rest of the organization, but nothing all that polished or groundbreaking has really come out of Microsoft from a consumer-facing standpoint in over a decade now. Microsoft could build the best AI around, but it doesn't matter without users.
If I invested $13 billion dollars, I’d expect to get answers to questions like “how does the product work” too.
OpenAI will in the end be acquired for less than its current valuation. Initially, I was paying for Claude (coding), Cursor (coding), and OpenAI (general, coding), and then started paying for Claude Code API credits.
Now I've canceled the OpenAI and Claude general subscriptions, because for general tasks, Grok and DeepSeek more than suffice. General-purpose AI is unlikely to be subscription-based, unlike the specialized (professional) kind. I'm now only paying for Claude Code API credits and still paying for Cursor.
Microsoft's corporate structure and company culture is actively hostile to innovation of any kind. This was true in Ballmer's era and is equally true today, no matter how many PR wins Nadella is able to pull off. The company justifies its market cap by selling office software and cloud services contracts to large corporations and governments via an army of salespeople and lobbyists, and that is what it will continue to be successful at. It got lucky by backing OpenAI at the right time, but the delusion of becoming an independent AI powerhouse like OpenAI, Anthropic, Google, Meta etc. will never be a reality. Stuff like this is simply not in the company's DNA.
That OpenAI would absolutely dominate the AI space was received wisdom after the launch of GPT-4. Since then we've had a major corporate governance shakeup, lawsuits around the non-profit status it is trying to convert into for-profit, and competitors out-innovating OpenAI. So OpenAI is no longer a shoo-in, and Microsoft has realized that they may actually be hamstrung by their partnership, because it prevents them from innovating in-house if OpenAI loses its lead. So the obvious strategic move is to do this: make sure MS has everything it needs to innovate in-house while maintaining the partnership with OpenAI, and try to leverage that partnership to give the in-house effort every possible advantage.
It's only logical. OpenAI is too expensive for what it produces. DeepSeek is on par with ChatGPT and cost less to build. Claude's development costs less, too.
If it's Mustafa vs Sam Altman, I know where I'd put my money. As much as I like Satya Nadella I think he's made some major hiring mistakes.
Good. I'm plotting a future without Microsoft
Surprising that Sam Altman's firing as CEO of OpenAI and his announced move to Microsoft weren't mentioned in this article.
Currently, it feels like many of the frontier models have reached approximately the same level of 'intelligence' and capability. No one is leaps ahead of the rest. Microsoft probably figured this is a good time to reconsider their AI strategy.
It would be absolutely insane for Microsoft to use DeepSeek. Just because a model is open weights doesn't mean there's not a massive threat-vector of a Trojan horse in those weights that would be undetectable until exploited.
What I mean is you could train a model to generate harmful code, and do so covertly, whenever some specific sequence of keywords is in the prompt. Then China could take some kind of action to cause users to start injecting those keywords.
For example: "Tribble-like creatures detected on Venus". That's a highly unlikely sequence, but it could be easily trained into models to trigger a secret "Evil Mode" in the LLM. I'm not sure if this threat-vector is well known or not, but I know it can be done, and it's very easy to train this into the weights, and would remain undetectable until it's too late.
Insert Toy Story "I don't want to play with you anymore." meme here.
xAI could do it, DeepSeek could do it, and Microsoft can as well. It's not hard to see.
Microsoft is notorious for starting partnerships that end poorly.
Microsoft and IBM partnered to create OS/2, then they left the project and created Windows NT.
Microsoft and Sybase partnered to work on a database, then split and created MS SQL Server.
Microsoft partnered with Apple to work on Macintosh software, learned from the early-access Macintosh prototypes, and created Windows 1.0 behind Apple's back.
Microsoft "embraced" Java, tried to apply a extend/extinguish strategy and when they got sued they split and created .NET.
Microsoft joined the OpenGL ARB, stayed for a while, then left and created Direct3D. And started spreading fear about OpenGL performance on Windows.
Microsoft bought GitHub, told users they came in peace and loved open source, then took all the repository data and trained AI models with their code.
Literally everyone in tech is plotting a future without OpenAI, from Microsoft down to everyone who just dropped $10k on a 512 GB VRAM Mac Studio.
AI is simply too useful and too important to be tied to some SaaS.
I find the assumption strange that Microsoft could run the same models more cheaply. It's not as if OpenAI knows how to do it and is choosing not to.
They need to and should hedge their bets and not put all eggs in one basket they don't fully control. Anything else would be fiduciarily irresponsible.
They don't buy or acquire what they can build internally, and they partner with startups to learn if they can build it. This is not new.
> OpenAI’s models, including GPT-4, the backbone of Microsoft’s Copilot assistant, aren’t cheap to run. Keeping them live on Azure’s cloud infrastructure racks up significant costs, and Microsoft is eager to lower the bill with its own leaner alternatives.
Am I reading this right? Does Microsoft not eat its own dog food? Their own infra is too expensive?
OpenAI is overambitious.
Their chasing of AGI is killing them.
They probably thought that burning cash was the way to get to AGI, and that on the way there they would make significant improvements over GPT 4 that they would be able to release as GPT 5.
And that is just not happening. While pretty much everyone else is trying to increase efficiency, and specialize their models to niche areas, they keep on chasing AGI.
Meanwhile, more and more models are being delivered within apps, where they create more value than in an isolated chat window. And OpenAI doesn't control those apps. So they're slowly being pushed out.
Unless they pull off yet another breakthrough, I don't think they have much of a future.
Just partner with Deepseek
Regardless of what happens, I think Sam needs to bench press.
This is almost certainly itself an AI written article.
I mean, obviously? There is no good reason for Microsoft to go all in on OpenAI.
The headline is also a bit hyperbolic. I'm sure there are good reasons Microsoft would want to build its own products on top of its own models and have finer control of things. That doesn't mean they are plotting a future where they do nothing at all with OpenAI.
Meanwhile I'm enjoying a present without Microsoft.
Ballmer would have caught this earlier.
Watch.
Nadella will not steer this correctly
Suleyman is a fraud lol
...embrace, extinguish
Maybe I'm just cynical, but I wonder how much of this initiative and energy is driven by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.
I feel like this is something I've seen a fair amount in my career. About seven years ago, when Google was theoretically making a big push to stage Angular on par with React, I remember complaining that the documentation for the current major version of Angular wasn't nearly good enough to meet this stated goal. My TL at the time laughed and said the person who spearheaded that initiative was already living large in their mansion on the hill and didn't give a flying f about the fate of Angular now.