On the one hand, the financials are terrible for an IPO in this market.
On the other, Nvidia is worth $3trn, so they can sell a pretty good dream of what success looks like to investors.
Personally I would expect them to get a valuation well above the $4bn from the 2021 round, despite the financials not coming close to justifying it.
Cerebras is well-known in the AI chip market. They make chips that are an entire wafer.
The only way for Cerebras to actually succeed in the market is to raise funds. They need better software, better developer relations, and better hardware too. It's a gamble, but if they can raise enough money then there's a chance of success, whereas if they can't it's pretty hopeless.
They have a cloud platform. I just ran a test query on their version of Llama 3.1 70B and got 566 tokens/sec.
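If anyone wants to reproduce that number, here's a minimal sketch of how I timed it; the `generate` callable is a placeholder for whatever streams tokens back (in practice, a wrapper around their OpenAI-compatible API):

```python
import time

def measure_throughput(generate, prompt):
    # Count streamed tokens and divide by wall-clock time.
    start = time.perf_counter()
    n_tokens = sum(1 for _token in generate(prompt))
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed
```

With a stub like `lambda p: iter(["tok"] * 100)` you can sanity-check the plumbing before pointing it at a real endpoint.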
The real winner in the chip war is TSMC. Everyone is using them to make chips.
If Cerebras keeps improving it will be a decent contender to Nvidia. Nvidia's VRAM bandwidth is a bottleneck: for pure inference, the GPU has to stream the whole model from memory at least once per token (amortized across the batch size). The bottleneck is not the Tensor Cores but memory transfers; Nvidia says so themselves. Cerebras fixes that (at the cost of software complexity and a narrower target market).
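The back-of-envelope math behind that bandwidth bound, with ballpark figures (~140 GB for Llama 3.1 70B at fp16, ~3.35 TB/s for H100 HBM3, both assumptions on my part):

```python
def tokens_per_sec(model_bytes, bandwidth_bytes_per_s, batch=1):
    # Decode is memory-bound: every step streams all weights once,
    # and that cost is amortized across the batch.
    return batch * bandwidth_bytes_per_s / model_bytes

# Llama 3.1 70B at fp16 ~ 140 GB; H100 HBM3 ~ 3.35 TB/s
print(tokens_per_sec(140e9, 3.35e12))  # ~24 tokens/sec at batch 1
```

This ignores KV-cache traffic and kernel overheads, so it's an upper bound per batch slot, but it shows why SRAM-resident weights buy Cerebras its single-stream speed.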
"the filing puts the spotlight on the company’s financials and its relationship with its biggest customer, Abu Dhabi-based G42, which is also a partner and an investor in the company."
"The documents also say that a single customer, G42, accounted for 83% of revenue in 2023 and 87% in the first half of 2024."
https://www.eetimes.com/cerebras-ipo-paperwork-sheds-light-o...
Kind of vaguely reminds me of Transmeta vs Intel/AMD back in ~2000.
Cerebras has a real technical advantage in development of wafer scale.
They use the whole wafer for a single chip (wafer-scale integration). The WSE-3 is optimized for sparse linear algebra ops and uses TSMC's 5nm process.
Their idea is to have 44 GB of SRAM per chip. SRAM is _very_ expensive compared to DRAM (about two orders of magnitude per bit).
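Rough numbers on why that's expensive (the prices here are illustrative assumptions, not vendor quotes):

```python
# Illustrative spot prices, not vendor quotes.
dram_per_gb = 3.0                 # assumed commodity DRAM price per GB
sram_per_gb = dram_per_gb * 100   # ~two orders of magnitude more per bit
wse3_sram_gb = 44

print(sram_per_gb * wse3_sram_gb)  # ~$13,200 in raw memory cost alone
```

And that's just the memory cost equivalent; the real cost is the wafer area SRAM eats that could otherwise be compute.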
It's easy to design a larger chip. What determines the price/performance ratio are things like:
- performance per chip area.
- yield per chip area.
Wafer-scale integration has been an idea for as long as there have been wafers. Yet I've almost never read of anyone taking it the full distance to a product. I don't know whether per-die yield or the associated technology problems were the sticking point, but it feels like a good idea that never quite makes it out the door.
Concerning in terms of hype bubble now having even more exposure to the stock market. Perhaps less concerning since it's a hardware startup? Nah, nvm, I think this will end up cratered within 3 years.
I’m going to go ahead and predict this flubs long term. Not only is what they are doing very challenging, but I’ve had some random brokerage house reach out to me multiple times about investing in this IPO. When your IPO process resorts to cold calling, I don’t think it’s a good sign. Granted, I have some associations with AI startups, but I don’t think that had anything to do with the outreach from the firm.
I don’t know enough to say they’ll fail or be successful, but I am wondering who will underwrite this IPO — they must have balls of steel and confidence galore.
Is it a good idea to go IPO when the balance sheet looks terrible?
Does cerebras make gaming GPUs, or is it enterprise-only?
How does Cerebras compare to D-Matrix?
They have zero moat
So many things here smell funny...
I have never heard of any models trained on this hardware. How does a company IPO on the basis of having the "best tech" in this industry when all the top models are trained on other hardware?
It just doesn't add up.
This is the first I've heard of Cerebras Systems.
From the article
>Cerebras had a net loss of $66.6 million in the first six months of 2024 on $136.4 million in sales, according to the filing.
That doesn't sound very good.
What makes them think they can compete with Nvidia, and why IPO right now?
Are they trying to get government money to make chip fabs like Intel or something?
NVIDIA is pretty established, but there's also Intel, AMD, and Google to contend with. Sure, Cerebras is unique in that they make one large chip out of the entire wafer, but nothing prevents these other companies from doing the same thing. Currently they choose not to because of wafer economics, but if they did, Cerebras would pretty much lose their advantage. https://www.servethehome.com/cerebras-wse-3-ai-chip-launched... 56x the size of an H100 but only 8x the performance isn't something I would brag about. I expected much higher performance since all processing is on one wafer. Something doesn't add up (I'm no system designer). Also, at $3.13 million per node, one could buy ~100 H100s at $30k each (not including system, cooling, cluster, etc). Based on price/performance, Cerebras loses IMO.
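Putting the numbers from that comparison together (all figures are the ones quoted above, so treat this as a sketch, not a benchmark):

```python
# Figures from the comment above: WSE-3 node ~ $3.13M at ~8x H100 perf;
# H100 ~ $30k (chip only, excluding system, cooling, cluster).
wse3_price, wse3_perf = 3.13e6, 8.0
h100_price, h100_perf = 30e3, 1.0

wse3_perf_per_dollar = wse3_perf / wse3_price
h100_perf_per_dollar = h100_perf / h100_price

print(h100_perf_per_dollar / wse3_perf_per_dollar)  # ~13x in H100's favor
```

Of course, this ignores networking, power, and rack space for 100 GPUs vs one node, which is exactly where Cerebras would argue the comparison tilts back.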