If it's anything like other UK government research bodies I've interacted with in the past, they started off with salaries significantly below market rate and then failed to even match inflation with pay raises due to UK government policies. Any fund injections are one-time capex for headlines, never new recurrent funding for keeping the lights on, paying the staff, or funding the research. The fact that these places produce any decent research is more a testament to the dedication of the staff (who have often been there since before the several decades of penny-pinching that drove them into the ground) than to anything the government does.
Not focusing on LLMs isn't a major sin imo; if I were donating money to an institute, I'd rather they were doing something unique than churning out another also-ran LLM. Researchers have diverse interests and expertise, and the field has far more depth than the current thing; it's not obviously bad that they were working on other stuff.
That said, if the examples he gives of what they were working on are representative, it implies they were spending their time chasing trends instead of doing fundamental research, and chasing the wrong ones at that. I'd suspect they're doing more than what was implied, though.
Glassdoor indicates that the Alan Turing Institute pays senior research associates $49,015 per year, senior project managers $48k, and senior research fellows the same. I wasn't able to identify any roles there that paid above $50k/year.
Is it any surprise that when you hire people with PhDs and pay them less than bartenders, you don't get the best people? Those people go to Google or Microsoft and make 10x as much.
"It’s concerning that none of the projects mentioned in this document, or indeed any other major open source AI project, arose in the UK." I believe this statement is at least partially incorrect. StabilityAI is a London, UK based company, and several of the researchers directly involved in the development of Stable Diffusion are UK citizens. https://www.crunchbase.com/organization/stability-ai
And the rest of the world goes, "meh..."
This is part of a trend: the birthplace of the Industrial Revolution lost most of its manufacturing industry, the nation that once championed free trade is cloistering itself through Brexit, the inventors of football (a.k.a. soccer) are falling behind at the World Cup, the nations that were once vassals of the British Empire are drifting away from the Commonwealth, and Scotland ...
Well, it kinda looks like Great Britain is becoming less Great with each passing decade.
In World War I, four empires collapsed: czarist Russia, the Ottoman Empire, Austria-Hungary, and the British Empire.
The Austrians already accepted it. The Russians and Turks are refusing desperately to accept it. The British didn't even notice it.
For the benefit of innovation it actually pays off when some organizations work on alternatives to the dominant model.
What if LLMs become stagnant and some other approach is needed, perhaps in tandem with it? Work that seems irrelevant today may become useful then.
I'm not saying that their approach is this, just that failure to produce an LLM doesn't necessarily equal an embarrassing failure.
So this article could be summarized as "man who has been running a DL company for 12 years suggests that a government institution fund DL more, and is mad that they aren't"? Disregarding the dash of woke==bad. Certainly a valid opinion, but these comments seem to be treating it as much more objective than I take it to be.
Substantively, this just seems ludicrously short-sighted. If all investment was focused on the most recent AI model to have success, DL wouldn't exist. I'm definitely saving this article for the day in the near future when people realize, hey, maybe the entire field of AI & cognitive science wasn't just wasting time for the last 70 years, and maybe those ideas will also be carried on the rising tide of Moore's Law.
Odd that the article author lists “Data as an instrument of coloniality: A panel discussion on digital and data colonialism” as an example of them doing nothing - digital colonialism is an important subject and one that many larger institutions seem afraid to cover.
Didn't the UK basically have that with DeepMind until Google bought them out? I'd suspect DeepMind would be a fantastic competitor to OpenAI today had they not sold.
Am I the only one who finds it odd how the British government brags about Alan Turing after what they did to him? Having a government research center named after him seems particularly strange after what they made him endure.
The state forced him to undergo chemical castration because of his homosexuality. Same state kept his achievements and contribution to the war effort a secret up until after his death, so they could persecute a war hero without the public knowing about it.
Crazy to think he was convicted in 1952, the same year Elizabeth became Queen and head of state. She could simply have pardoned him. The man saved women, men, and children of all races and orientations from a horrible end. Had he not cracked Enigma's cryptography, there would most likely be nothing left today of the crown that persecuted him. Blown to dust by the Luftwaffe.
If only the British government had extended the same humanity to Turing himself.
Imagine how people might feel about the "departed AI train" in any lesser European (or other) country that would not even remotely dream of having an Alan Turing Institute.
It is such self-inflicted misery. The bright future for the UK would have been as a core member of the EU, helping shape a large economic space with massive amounts of talent, happily moving around its wonderful cities, tapping its endless cultural heritage, and building a digital society congruent with the European way of life and values (which in various important ways differ from the US's). While the nationalistic reflex is not as strong elsewhere in Europe, it is still a hindrance that shows up in countless frictions.
In any case, LLMs are just another stop on the journey. If people stop digging while in a hole, there is always a way out.
1) Need of society
2) Necessity for survival
3) Individual determination
4) State priorities
I think one can trace most inventions in human history back to a scenario where more than one of the reasons above was true.
So going by this, even if we had a determined individual at the Alan Turing Institute but none of the remaining three conditions held, their chances of success would be slim compared to their counterparts at places like Canada or the US.
Look at China and you can easily find an environment where a determined individual, backed by state priorities, gets funding and support for critical technologies.
Probably the state priorities will now change, and in the future you can expect some new breakthroughs happening at the institute as well.
The criticism of the Alan Turing Institute's past failure to become a world-leading AI research institution may be fair; I don't know whether they really missed an opportunity or not.
However, after that it's a somewhat confusing post if you click the links, because after criticizing the government for not being willing to focus on LLMs, he links to a press release about 100 million pounds being devoted to training LLMs.
It's unclear to me what the objection to this project is, or what is meant by "open source" that is different from a normal publicly funded government research project.
It looks like they recognise this failure: “In 2023 that purpose remains unchanged, but we are reassessing our founding assumptions and setting a course for the next five years.”
Quite what that course is remains to be seen.
My personal experience is that the UK loves giving out funding to established/safe organisations that it knows won’t cause a (positive or negative) splash.
I’ve seen very few UK government initiatives funding genuinely small organisations, but of those that I’ve seen, they have been well executed.
Not to be confused with the Glasgow-based Turing Institute which did do some very cool stuff back in the 1980s/1990s:
https://en.wikipedia.org/wiki/Turing_Institute
Founded by Donald Michie who worked at Bletchley Park!
I feel this isn't quite a failure of the Alan Turing Institute or of the UK, but rather of university research vs. big tech -- the latter has a significant advantage in this specific field, for a number of reasons. There was an interesting paper about that, and a related discussion here: https://news.ycombinator.com/item?id=35564013
This is a bit like the CIA totally failing to spot the imminent collapse of the Soviet Union.
But they are kilometers ahead of everybody else on predicting what prices NFTs sell for, and understanding the role of coloniality in data. It's a tradeoff, really.
And the Human Brain Project is another large failure.
Europe is great at making laws but not much else.
Because their real job is to preserve knowledge, not to innovate or invent new stuff.
But if academia is not the institution in charge of innovating, then which one is?
I'm still a bit confused about this. It seems like a contradiction, but I don't know what to do about it.
PhD programs saying "come innovate" at institutions whose real job is to preserve knowledge?
Why should they copy everyone?
So did DeepMind, excuse me, Google DeepMind.
Oh boy, jarring coming out of that dark background website
I guess this is karma at play, given what Britain did to Alan Turing?
Does anyone else think the Turing test is a bit silly and outdated now? Like, ChatGPT could probably pass it, yet it is clearly not intelligent.
The sadder thing is that someone was once enamored by this institution enough to care.
Maybe this is good for the UK taxpayers and parliament to know. I don't think the rest of us - or anyone - should spend any more energy on this. Most publicly funded institution engaged in an affinity grift. News at 11. Stop funding it, the end.
Quoting Rich Sutton, who wrote the perfect response some years ago:
"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation."[a]
The very smart folks at the Alan Turing Institute are learning firsthand how bitter the lesson can be.
---
[a] http://incompleteideas.net/IncIdeas/BitterLesson.html