There have been two criticisms of this paper floating around.
1. The test mechanism is prediction of sinusoidal series. While it's certainly possible to train transformers on mathematical functions, it's not clear why findings from a model trained on sinusoidal functions would generalize to the domain of written human language (which is ironic, given the paper's topic).
2. Even if it were true that these models don't generalize beyond their training, large LLMs' training corpus is basically all of written human knowledge. The goalposts have then moved to "well, they won't push the frontier of human knowledge forward," which is a much diminished claim, since the vast majority of humans are also not pushing the frontier of human knowledge forward; they use existing human knowledge to accomplish their daily goals.
My uneducated opinion is that this paper is bollocks. Maybe they are looking at deeper mathematical results, instead of everyday tasks.
But every single day I am using OpenAI GPT-4 to handle novel tasks. I am working on a traditional SaaS vertical, except with a pure chatbot interface. The model works: it is able to understand which function to call, which parameters to extract, and when the inputs will not work. Sure, if you ask it to do some task outside that scope, it fails.
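For what it's worth, the function-calling part is mostly a dispatch layer over the model's structured output. Here is a minimal sketch, assuming the model has already been prompted with a tool schema and replies with a JSON object containing a function name and arguments; the `create_ticket` and `lookup_account` handlers are hypothetical, not taken from any real product or SDK.

```python
import json

# Hypothetical local handlers the chatbot can dispatch to.
def create_ticket(subject: str, priority: str = "normal") -> str:
    return f"ticket created: {subject!r} (priority={priority})"

def lookup_account(email: str) -> str:
    return f"account record for {email}"

HANDLERS = {"create_ticket": create_ticket, "lookup_account": lookup_account}

def dispatch(model_output: str) -> str:
    """Route a model reply shaped like {"name": ..., "arguments": {...}} to a local
    handler, checking that the call and its parameters exist before running it."""
    try:
        call = json.loads(model_output)
        handler = HANDLERS[call["name"]]
        return handler(**call["arguments"])
    except (json.JSONDecodeError, KeyError, TypeError) as exc:
        # The "knows when the inputs won't work" part: refuse instead of crashing.
        return f"could not execute tool call: {exc}"

print(dispatch('{"name": "create_ticket", "arguments": {"subject": "billing bug"}}'))
print(dispatch('{"name": "delete_everything", "arguments": {}}'))
```

The validation against real handlers, rather than the model itself, is what catches the bad calls.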
Google/DeepMind needs to start showing up with some working results.
Where. are. the. models. Google.
We humans don't even know when we are doing real extrapolation, and the vast majority of humans are interpolating. I bet many do nothing but interpolate their whole lives.
So - and I say this as someone who writes NLP papers too - who cares?
The one thing is that they seem to be using relatively small models. This may be a really damning result, but I was under the impression that any generalization capabilities of LLMs appear in a non-linear fashion as you increase the parameter count to the tens of billions or trillions, as in GPT-4. It would be interesting if they could recreate the same experiment with a much larger model. Unfortunately, I don't think that's likely to happen, because of the resources required to train such models and because anti-open-source hysteria will likely prevent larger models from being made publicly available, much less the data they were trained on. Imagine that: stifling research and fearmongering reduce the usefulness of the science that does manage to get done.
Current AI models are approximation functions with a huge number of parameters. These approximation functions are reasonably good at interpolation, meh at extrapolation, and have nothing to do with generalization.
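A toy illustration of that gap, using an ordinary polynomial fit instead of a transformer (nothing like the paper's setup, just a sketch of the interpolation-vs-extrapolation point):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data: noisy samples of sin(x) on [0, 2*pi].
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.shape)

# Fit a degree-9 polynomial: a big, flexible approximator for this range.
coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(x):
    return float(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

x_inside = np.linspace(0, 2 * np.pi, 100)            # interpolation: inside the training range
x_outside = np.linspace(2 * np.pi, 4 * np.pi, 100)   # extrapolation: outside it

print(f"in-domain MSE:     {mse(x_inside):.4f}")   # small
print(f"out-of-domain MSE: {mse(x_outside):.2e}")  # blows up
```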
"Generalization" is always a data problem.
If you trained it on one function class, of course that's all it learned to do. That's all it ever saw!
If you want it to learn arbitrary function classes to some degree, the solution is simple: train it on many different function classes.
Untrained models are as blank-slate as you could possibly imagine. They're not even comparable to newborn humans, who have millions of years of evolution baked in. The data you feed them is their world. Their only world.
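To make "train it on many different function classes" concrete, here is a rough sketch of a pretraining stream built as a mixture of function classes, loosely in the spirit of the paper's setup; the specific classes and sampling ranges below are illustrative, not the paper's actual mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative function classes; the paper's actual mixture and ranges differ.
def sample_linear(d):
    w = rng.normal(size=d)
    return lambda x: x @ w

def sample_sinusoid(d):
    freq, phase = rng.uniform(0.5, 2.0), rng.uniform(0, 2 * np.pi)
    return lambda x: np.sin(freq * x.sum(axis=-1) + phase)

def sample_relu_net(d, hidden=8):
    w1, w2 = rng.normal(size=(d, hidden)), rng.normal(size=hidden)
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

FUNCTION_CLASSES = [sample_linear, sample_sinusoid, sample_relu_net]

def sample_sequence(n_points=32, d=1):
    """One pretraining example: an (x, f(x)) sequence with f drawn from a random class."""
    f = FUNCTION_CLASSES[rng.integers(len(FUNCTION_CLASSES))](d)
    x = rng.uniform(-1.0, 1.0, size=(n_points, d))
    y = f(x)
    return np.column_stack([x, y])  # shape (n_points, d + 1)

batch = np.stack([sample_sequence() for _ in range(4)])
print(batch.shape)  # (4, 32, 2)
```

The model only ever sees what this generator emits; widen the mixture and the "world" it can interpolate over widens with it.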
FWIW, the paper's title focuses on quite a different conclusion than the submission title: "Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models".
Why are transformer models so bad at math? They often fail at simple addition.
Overfitting much?
TLDR: transformer models (at GPT-2 scale) are great (near-optimal) at interpolating between the cases given in (pre-)training, but fail at extrapolation as soon as we leave the training domain. Impressive results may be due more to the wide breadth of (pre-)training data, and less to generalization ability.
I'm not sure folks who're putting out strong takes based on this have read this paper.
This paper uses GPT-2 transformer scale, on sinusoidal data:
> We trained a decoder-only Transformer [7] model of GPT-2 scale implemented in the Jax-based machine learning framework, Pax, with 12 layers, 8 attention heads, and a 256-dimensional embedding space (9.5M parameters) as our base configuration [4].
> Building on previous work, we investigate this question in a controlled setting, where we study transformer models trained on sequences of (x,f(x)) pairs rather than natural language.
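As a rough sanity check on the quoted scale (back-of-the-envelope only: a standard GPT-2-style block with four attention projections and a 4x-expanded MLP, ignoring embeddings, biases, and LayerNorms):

```python
# Back-of-the-envelope parameter count for the quoted config: 12 layers, d_model = 256.
# Assumes a standard GPT-2-style block (4 attention projections + 4x-expanded MLP);
# embeddings, biases, and LayerNorm parameters are ignored.
d_model, n_layers = 256, 12

attn_params = 4 * d_model * d_model        # W_q, W_k, W_v, W_o
mlp_params = 2 * d_model * (4 * d_model)   # up-projection and down-projection
per_layer = attn_params + mlp_params       # 786,432

print(f"{n_layers * per_layer:,} parameters")  # 9,437,184
```

So roughly 9.4M, consistent with the quoted 9.5M figure: a tiny model by current standards.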
Nowhere near definitive or conclusive.
Not sure why this is news outside of the Twitter-techno-pseudo-academic-influencer bubble.