Mistral ships Le Chat – enterprise AI assistant that can run on prem

by _lateralus_ on 5/7/2025, 2:24 PM with 158 comments

by codingbot3000 on 5/7/2025, 5:41 PM

I think this is a game changer, because data privacy is a legitimate concern for many enterprise users.

Btw, you can also run Mistral locally within the Docker model runner on a Mac.
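For anyone curious, a minimal sketch of that local setup, assuming Docker Desktop 4.40+ on an Apple silicon Mac with Model Runner enabled (the `ai/mistral` model name is my assumption; check `docker model ls` or the `ai/` namespace on Docker Hub for what's actually published):

```shell
# Pull a Mistral model from Docker Hub's ai/ namespace (name is a guess, verify first)
docker model pull ai/mistral

# One-shot prompt against the locally running model
docker model run ai/mistral "Summarize the main concerns around enterprise data privacy."
```

Model Runner can also expose an OpenAI-compatible endpoint locally, so existing client code can be pointed at it instead of a hosted API.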

by beernet on 5/8/2025, 11:08 AM

Mistral really became what all the other over-hyped EU AI start-ups / collectives (Stability, Eleuther, Aleph Alpha, Nyonic, possibly Black Forest Labs, government-funded collaborations, ...) failed to achieve, although many of them existed way before Mistral. Congrats to them, great work.

by 85392_school on 5/7/2025, 5:32 PM

This announcement accompanies the new and proprietary Mistral Medium 3, being discussed at https://news.ycombinator.com/item?id=43915995

by Havoc on 5/7/2025, 11:39 PM

Not quite following. It seems to talk about features commonly associated with local servers, but then ends with "available on GCP".

Is this an API endpoint? A model enterprises deploy locally? A piece of software plus a local model?

There is so much corporate synergy-speak there that I can’t tell what they’re selling.

by _pdp_ on 5/7/2025, 6:25 PM

While I am rooting for Mistral, having access to a diverse set of models is the killer app IMHO. Sometimes you want to code. Sometimes you want to write. Not all models are made equal.

by I_am_tiberius on 5/7/2025, 7:05 PM

I really love using Le Chat. I feel much safer giving information to them than to OpenAI.

by victorbjorklund on 5/7/2025, 5:45 PM

Why use this instead of an open source model?

by starik36 on 5/7/2025, 9:28 PM

I don't see any mention of hardware requirements for on-prem. Which GPUs? How many? How much disk space?

by adamsiem on 5/12/2025, 6:57 PM

Parsing email...

The intro video highlights searching email alongside other tools.

What email clients will this support? Are there related tools that will do this?

by guerrilla on 5/7/2025, 6:20 PM

Interesting. Europe is really putting up a fight for once. I'm into it.

by qwertox on 5/8/2025, 5:56 PM

This is so fast it took me by surprise. I'm used to waiting ages for responses to finish on Gemini and ChatGPT, but this is instantaneous.

by amelius on 5/8/2025, 8:45 AM

I'm curious about the ways in which they could protect their IP in this setup.

by badmonster on 5/8/2025, 2:48 AM

Interesting take. I wonder if other LLM competitors will do the same.

by mxmilkiib on 5/8/2025, 4:18 PM

The site doesn't work in dark mode; the text is dark as well.

by phupt26 on 5/7/2025, 7:33 PM

Mistral's other new model (Medium 3) is great too. Link: https://newscvg.com/r/yGbLTWqQ

by m-hodges on 5/7/2025, 6:59 PM

I love that "le chat" translates from French to English as "the cat".

by caseyy on 5/7/2025, 6:55 PM

This will make for some very good memes. And other good things, but memes included.

by iamnotagenius on 5/7/2025, 6:28 PM

Mistral models, though, are not interesting as models. Context handling is weak, the language is dry, and coding is mediocre; I'm not sure why anyone would choose them over Chinese (Qwen, GLM, DeepSeek) or American models (Gemma, Command A, Llama).

by FuriouslyAdrift on 5/7/2025, 8:24 PM

GPT4All has been running locally for quite a while...

by curiousgal on 5/7/2025, 6:17 PM

Too little, too late. I work at a large European investment bank and we're already using Anthropic's Claude via GitLab Duo.