We've Been Conned: The Truth about Big LLM

by midzer on 4/26/2025, 12:15 PM with 2 comments

by joegibbs on 4/26/2025, 12:44 PM

It could be $98/hour but you're splitting that up between multiple users. You don't run the instance entirely for an hour, you run it for a few seconds 20-50 times in the hour. If you had Claude spitting out tokens for an hour straight you'd run up a crazy bill.
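The amortization argument can be sketched with a quick back-of-envelope calculation. Everything besides the $98/hour rate is an assumption pulled from the comment's rough ranges, not a measured figure:

```python
# Back-of-envelope amortization of a shared GPU instance.
# All numbers except HOURLY_RATE are illustrative assumptions.

HOURLY_RATE = 98.0        # $/hour for the instance (from the comment)
REQUESTS_PER_USER = 35    # midpoint of the 20-50 requests/hour range
SECONDS_PER_REQUEST = 5   # assumed generation time per request

busy_seconds = REQUESTS_PER_USER * SECONDS_PER_REQUEST  # 175 s of compute
cost_per_user = HOURLY_RATE * busy_seconds / 3600       # fraction of the hour actually used

print(f"~${cost_per_user:.2f}/hour per user")  # ~$4.76/hour per user
```

Under those assumptions each user consumes only about 5% of the instance-hour, which is why the headline $98/hour figure overstates the per-user cost.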

It would be uneconomical to run Llama 3 14B on a bunch of A100s unless you're actually going to be using all that throughput. You can run Llama 3 8B locally no problem at all on regular consumer hardware with good speeds.

by Hackbraten on 4/26/2025, 6:18 PM

I know it's not the point of the article, but anyway: why does the author even allow their IDE to suggest auto-completions while they're editing natural-language text?

If they hate it so much, why don't they turn it off once and for all?