Confer – End to end encrypted AI chat

by vednig on 1/13/2026, 1:45 PM with 172 comments

Signal creator Moxie Marlinspike wants to do for AI what he did for messaging - https://arstechnica.com/security/2026/01/signal-creator-moxi...

Private Inference: https://confer.to/blog/2026/01/private-inference/

by shawnz on 1/13/2026, 7:18 PM

I don't agree that this is end to end encrypted. For example, a compromise of the TEE would mean your data is exposed. In a truly end to end encrypted system, I wouldn't expect a server side compromise to be able to expose my data.

This is similar to the weasely language Google is now using with the Magic Cue feature ever since Android 16 QPR 1. When it launched, it was local only -- now it's local and in the cloud "with attestation". I don't like this trend and I don't think I'll be using such products

by azmenak on 1/13/2026, 9:56 PM

As someone who has spent a good amount of time working on trusted compute (in the crypto domain), I'll say this is generally pretty well thought out. It doesn't get us to an entirely 0-trust e2e solution, but it's still very good.

Inevitably, the TEE hardware vendor must be trusted. I don't think this is a bad assumption in today's world, but this is still a fairly new domain, and longer term it becomes increasingly likely that TEE compromises like design flaws, microcode bugs, key compromises, etc. are discovered (if they haven't already been!). Then we'd need to consider how Confer would handle these and what sort of "break glass" protocols are in place.

This also requires a non-trivial amount of client-side coordination, plus guards against supply chain attacks. Setting aside the details of how this is done, even with a transparency log, the client must trust something about "who is allowed to publish acceptable releases". If the client trusts "anything in the log," an attacker could publish their own signed artifacts, so the client must effectively trust a specific publisher identity/key, plus the log's append-only/auditable property, to prevent silent targeted swaps.

The net result is a need to trust Confer's identity and published releases, at least in the short term, as third-party auditors could flag any issues in reproducible builds. As I see it, the game theory suggests Confer remains honest; Moxie's reputation plays a fairly large role in this.
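To make the trust model above concrete, here's a minimal Python sketch of a client that pins a specific publisher key rather than trusting "anything in the log". All names and keys are hypothetical, and HMAC stands in for a real asymmetric signature scheme (e.g. Ed25519):

```python
import hashlib
import hmac

# Hypothetical pinned publisher key; in reality the client ships with the
# publisher's *public* key and verifies asymmetric signatures
PINNED_PUBLISHER_KEY = b"confer-release-signing-key"

def sign(key: bytes, artifact: bytes) -> bytes:
    # HMAC stands in for a real signature for the sake of a runnable sketch
    return hmac.new(key, artifact, hashlib.sha256).digest()

def client_accepts(artifact: bytes, sig: bytes, log: list[bytes]) -> bool:
    # 1) Trust a specific publisher key, not "anything in the log"
    if not hmac.compare_digest(sign(PINNED_PUBLISHER_KEY, artifact), sig):
        return False
    # 2) Require the artifact to appear in the append-only, auditable log,
    #    so a silent targeted swap would be publicly visible
    return artifact in log

# An attacker's self-signed artifact is rejected even though it's in the log
evil = b"evil-build"
evil_sig = sign(b"attacker-key", evil)
log = [b"good-build-1.0", evil]
good_sig = sign(PINNED_PUBLISHER_KEY, b"good-build-1.0")
print(client_accepts(evil, evil_sig, log))          # False
print(client_accepts(b"good-build-1.0", good_sig, log))  # True
```

The point of the second check is not secrecy but auditability: even the legitimate publisher can't serve a targeted build to one user without leaving a publicly visible log entry.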

by datadrivenangel on 1/13/2026, 3:44 PM

Get a fun error message on debian 13 with firefox v140:

"This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features. Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.)."

by JohnFen on 1/13/2026, 2:31 PM

Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?

I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.

by jeroenhd on 1/13/2026, 3:33 PM

An interesting take on the AI model. I'm not sure what their business model is like, as collecting training data is the one thing that free AI users "pay" in return for services, but at least this chat model seems honest.

Using remote attestation in the browser to attest the server rather than the client is refreshing.

Using passkeys to encrypt data does limit browser/hardware combinations, though. My Firefox+Bitwarden setup doesn't work with this, unfortunately. Firefox on Android also seems to be broken, but Chrome on Android works well at least.

by kfreds on 1/16/2026, 2:23 PM

It’s exciting to hear that Moxie and colleagues are working on something like this. They definitely have the skills to pull it off.

Few in this world have done as much for privacy as the people who built Signal. Yes, it’s not perfect, but building security systems with good UX is hard. There are all sorts of tradeoffs and sacrifices one needs to make.

For those interested in the underlying technology, they're basically combining reproducible builds, remote attestation, and transparency logs. They're doing the same thing as Apple Private Cloud Compute and a few others. I call it system transparency, or runtime transparency. Here's a lightning talk I did last year: https://youtu.be/Lo0gxBWwwQE
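The way those three pieces fit together can be sketched in a few lines of Python. This is a toy model under stated assumptions (real attestation quotes are signed by vendor keys and carry much more state; that verification is omitted here, and all values are hypothetical):

```python
import hashlib

def measure(binary: bytes) -> str:
    # Stand-in for the TEE's launch measurement of the server image
    return hashlib.sha256(binary).hexdigest()

# Reproducible build: anyone rebuilding the published source gets the same
# digest, and that digest is recorded in a public transparency log
published_source_build = b"confer-server-image-v1"
transparency_log = {measure(published_source_build)}

def verify_attestation(quote_measurement: str) -> bool:
    # Remote attestation: the hardware-signed quote reports what is actually
    # running; the client accepts only measurements found in the log
    return quote_measurement in transparency_log

print(verify_attestation(measure(published_source_build)))  # True
print(verify_attestation(measure(b"tampered-image")))       # False
```

The three mechanisms cover different gaps: reproducible builds let auditors map a digest back to source code, the transparency log makes the set of valid digests public, and remote attestation binds a live server to one of those digests.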

by lrvick on 1/16/2026, 1:18 PM

What he did with messaging... So he will centralize all of it with known broken SGX metadata protections, weak supply chain integrity, and a mandate everyone supply their phone numbers and agree to Apple or Google terms of service to use it?

by frankdilo on 1/16/2026, 1:46 PM

I do wonder what models it uses under the hood.

ChatGPT already knows more about me than Google did before LLMs, but would I switch to inferior models to preserve privacy? Hard tradeoff.

by AdmiralAsshat on 1/13/2026, 1:22 PM

Well, if anyone could do it properly, Moxie certainly has the track record.

by paxys on 1/13/2026, 7:56 PM

"trusted execution environment" != end-to-end encryption

The entire point of E2EE is that both "ends" need to be fully under your control.

by colesantiago on 1/16/2026, 1:30 PM

The website is: https://confer.to/

"Confer - Truly private AI. Your space to think."

"Your Data Remains Yours, Never trained on. Never sold. Never shared. Nobody can access it but you."

"Continue With Google"

Make of that what you will.

by pona-a on 1/16/2026, 5:04 PM

Collecting the email doesn't inspire much confidence. An account-number model like Mullvad's would seem preferable, or you could go all-in on syncable passkeys as the only user identifier.

The web app itself feels poorly made—almost vibe-coded in places: nonsensical gradients, UI elements rendering in flashes of white, and subtly off margins and padding.

The model itself is unknown, but it speaks with a cadence reminiscent of GPT-4o.

I'm no expert, but calling this "end-to-end encrypted" is only accurate if one end is your client and the other is a very much interposable GPU (assuming vendor’s TEE actually works—something that, in light of tee.fail, feels rather optimistic).

by jdthedisciple on 1/13/2026, 8:44 PM

The best private LLM is the one you host yourself.

by throwaway35636 on 1/13/2026, 9:08 PM

Interestingly, the Confer image on GitHub doesn't seem to include the model weights in the attestation (they seem to be loaded from a mounted ext4 disk without dm-verity). This probably doesn't compromise the privacy of the communication (as long as the model format doesn't contain any executable part), but it exposes users to a "model swapping" attack, where the Confer operator makes a user talk to an "evil" model without them noticing. Such an evil model could be fine-tuned to provide specifically crafted output to the user. Authenticating the model seems important; maybe it's done at another level of the stack?
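A rough Python sketch of why this matters: if the weights' digest were bound into the attested measurement (e.g. via a dm-verity root hash), swapping the model would change what the client sees. This is an illustrative toy, not Confer's actual scheme, and all values are hypothetical:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def attested_measurement(image: bytes, weights: bytes) -> str:
    # Hypothetical: bind the weights digest into the measurement, the way a
    # dm-verity root hash could be baked into the attested configuration
    return digest(image + bytes.fromhex(digest(weights)))

image = b"confer-inference-image"
good_weights = b"model-weights-v1"
evil_weights = b"fine-tuned-evil-model"

expected = attested_measurement(image, good_weights)
print(attested_measurement(image, good_weights) == expected)  # True
print(attested_measurement(image, evil_weights) == expected)  # False
```

Without that binding, the attestation only vouches for the serving code, and any set of weights mounted at runtime passes unnoticed.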

by slipheen on 1/13/2026, 8:19 PM

Does it say anywhere which model it’s using?

I see references to vLLM in the GitHub repo, but not which actual model (Llama, Mistral, etc.), or whether they have a custom fine-tune, or whether you supply your own Hugging Face link.

by piloto_ciego on 1/16/2026, 5:30 PM

I really, really want this, but the passkey requirement doesn't work with Bitwarden, and no, I'm not moving to 1Password.

by LordDragonfang on 1/13/2026, 8:09 PM

> Advanced Passkey Features Required

> This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features.

> Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.).

(Running Chrome 143)

So... does this just not support desktops without overpriced webcams, or am I missing something?
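For context on why the PRF extension is required at all: it lets a site evaluate a pseudorandom function keyed by the passkey's hardware-bound secret, and the result can be stretched into an encryption key. A minimal Python sketch of that key-derivation step, using HKDF (RFC 5869); the PRF output is simulated with random bytes and the info label is hypothetical:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal HKDF (RFC 5869) with an empty (all-zero) salt
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Stand-in for the PRF extension output returned by the authenticator;
# in the browser this would come from navigator.credentials.get(...)
prf_output = os.urandom(32)
encryption_key = hkdf_sha256(prf_output, b"confer-e2ee-key")  # hypothetical label
print(len(encryption_key))  # 32
```

Because the PRF is keyed inside the authenticator, the same passkey deterministically yields the same key material on each login without the server ever seeing it, which is presumably why browsers lacking the extension are locked out.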

by jmathai on 1/14/2026, 5:16 AM

I am super curious about this. I wonder what baseline it needs to meet to pull me away from using ChatGPT or Claude.

My usage of it would be quite different than ChatGPT. I’d be much freer in what I ask it.

I think there’s a real opportunity for something like this. I would have thought Apple would have created it but they just announced they’ll use Gemini.

Awesome launch Moxie!

by jeroadhd on 1/13/2026, 7:21 PM

Again with the confidential VM and remote attestation crypto theater? Moxie has a good track record in general, and yet he seems to have a huge blind spot in trusting Intel's broken "trusted VM" computing for some inexplicable reason. He designed the user backups of Signal messages to the server with similar crypto-secure "enclave" snake oil.

by imustachyou on 1/16/2026, 2:28 PM

Am I missing something? Won't the input to the LLM necessarily be plaintext, and the output too? Then, as long as the LLM has logs, the real user input will be available somewhere on their servers.

by jrm4 on 1/13/2026, 9:49 PM

Aha. This, ideally, is a job for local only. Ollama et al.

Now, of course, it's questionable whether my little graphics card can reasonably compare to a bigger cloud thing (for me, presently, a very genuine question), but local really should be the gold standard here.

by orbital-decay on 1/13/2026, 8:46 PM

At least Cocoon and similar services relying on TEE don't call this end-to-end encryption. Hardware DRM is not E2EE, it's security by obscurity. Not to say it doesn't work, but it doesn't provide mathematically strong guarantees either.

by hiimkeks on 1/13/2026, 7:18 PM

I am confused. I get E2EE chat with a TEE, but the TEEs I know of (admittedly I'm not an expert) are not powerful enough to do the actual inference, at least not any useful inference. The blog posts published so far just gloss over that.

by f_allwein on 1/13/2026, 2:14 PM

Interesting! I wonder a) how much of an issue this addresses, i.e. how much are people worried about privacy when they use other LLMs? And b) how much of a disadvantage is it for Confer not to be able to read/train on user data?

by lsofzz on 1/14/2026, 9:06 AM

MM is basically up-selling his _Signal_ trust score. Granted, Signal and its RedPhone predecessor upped the game, but calling this E2E-encrypted AI chat is a bit of a stretch.

by saurik on 1/14/2026, 3:13 AM

I am shocked at how quickly everyone is trying to forget that TEE.fail happened, and so now this technology doesn't prove anything. I mean, it isn't useless, but DNS/TLS and physical security/trust become load bearing, to the point where the claims made by these services are nonsensical/dishonest.

by letmetweakit on 1/13/2026, 8:07 PM

How does inference work within a TEE? Isn't performance a lot more restricted?

by 4d4m on 1/14/2026, 6:03 PM

Has anyone gotten access yet?

by moralestapia on 1/16/2026, 1:52 PM

Backdoor it?

by throwpoaster on 1/16/2026, 1:32 PM

Add a defunct cryptotoken?

by voidfunc on 1/16/2026, 1:58 PM

Do what he did for messaging? Make a thing almost nobody uses?

by b65e8bee43c2ed0 on 1/16/2026, 1:19 PM

what did he do for messaging? Signal is hardly more private than goddamn Whatsapp. in fact, given that Whatsapp had not been heavily shilled as the "totally private messenger for journalists and whistleblowers :^)" by the establishment media, I distrust it less.

edit @ -4 points: please go ahead and explain why Signal needs your phone number and rejects third-party clients.