Mistral really became what all the other over-hyped EU AI start-ups / collectives (Stability, Eleuther, Aleph Alpha, Nyonic, possibly Black Forest Labs, government-funded collaborations, ...) failed to achieve, although many of them existed way before Mistral. Congrats to them, great work.
This announcement accompanies the new and proprietary Mistral Medium 3, being discussed at https://news.ycombinator.com/item?id=43915995
Not quite following. It seems to talk about features commonly associated with local servers, but then ends with availability on GCP.
Is this an API point? A model enterprises deploy locally? A piece of software plus a local model?
There is so much corporate synergy-speak there that I can’t tell what they’re selling.
While I am rooting for Mistral, having access to a diverse set of models is the killer app IMHO. Sometimes you want to code. Sometimes you want to write. Not all models are made equal.
I really love using Le Chat. I feel much safer giving information to them than to OpenAI.
Why use this instead of an open source model?
I don't see any mention of hardware requirements for on-prem. What GPUs? How many? Disk space?
Parsing email...
The intro video highlights searching email alongside other tools.
What email clients will this support? Are there related tools that will do this?
Interesting. Europe is really putting up a fight for once. I'm into it.
This is so fast it took me by surprise. I'm used to waiting for ages until the response finishes on Gemini and ChatGPT, but this is instantaneous.
I'm curious about the ways in which they could protect their IP in this setup.
Interesting take. I wonder if other LLM competitors will do the same.
The site doesn't work with dark mode; the text is dark as well.
Mistral's other new model, Medium 3, is great too. Link: https://newscvg.com/r/yGbLTWqQ
I love that "le chat" translates from French to English as "the cat".
This will make for some very good memes. And other good things, but memes included.
Mistral models, though, are not interesting as models. Context handling is weak, language is dry, coding is mediocre; I'm not sure why anyone would choose them over Chinese (Qwen, GLM, DeepSeek) or American models (Gemma, Command A, Llama).
GPT4All has been running locally for quite a while...
Too little, too late. I work in a large European investment bank and we're already using Anthropic's Claude via GitLab Duo.
I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.
Btw, you can also run Mistral locally with the Docker Model Runner on a Mac.
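For anyone wanting to try this, a minimal sketch of the workflow (assuming Docker Desktop with the Model Runner feature enabled; `ai/mistral` is the model name as published in Docker Hub's `ai` namespace, so check `docker model list` / Docker Hub for the exact tag available to you):

```shell
# Pull a Mistral model from Docker Hub's ai namespace
docker model pull ai/mistral

# Run a one-off prompt against the model locally
docker model run ai/mistral "Explain what the mistral wind is in one sentence."

# See which models are available locally
docker model list
```

On Apple Silicon Macs the Model Runner executes inference natively rather than inside a Linux VM, which is what makes local performance reasonable.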