Finally, the highly reputable science publication LA Times provides proof that LLMs are in fact large language models, rather than large math solvers or large fact models.
I wonder whether AI's genuine usefulness for programming has led some to overestimate its usefulness in general.
LLMs turn traditional computing upside down.
Instead of very accurate results at low cost, they produce inaccurate results at high cost.
Generalized intelligence and reasoning are not achievable by brute-force statistical simulation, regardless of the amount of money and hope invested/wasted.