Is there a reason why dialback isn't the answer?
I would think it's more secure than clientAuth certs because if an attacker gets a misissued cert they'd have to actually execute a MitM attack to use it. In contrast, with a misissued clientAuth cert they can just connect to the server and present it.
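For context, dialback (XEP-0220) boils down to the originating server proving it can answer a challenge delivered via DNS to the authoritative server for its domain. A rough sketch of the key generation, following the HMAC construction the XEP suggests (the secret, domains, and stream ID here are made up, and real implementations may derive the key differently):

```python
import hashlib
import hmac

def dialback_key(secret: bytes, receiving: str, originating: str, stream_id: str) -> str:
    """HMAC-SHA256 keyed with SHA256(secret) over 'receiving originating stream_id'.
    An attacker with a misissued cert still can't answer the verification
    request unless they also control DNS or routing for the domain."""
    msg = " ".join([receiving, originating, stream_id]).encode()
    return hmac.new(hashlib.sha256(secret).digest(), msg, hashlib.sha256).hexdigest()

# The originating server sends this key on the new connection; the receiving
# server then asks the authoritative server for the originating domain to
# confirm or deny it out of band.
key = dialback_key(b"per-server secret", "chat.example.com", "xmpp.example2.com", "D60000229F")
```

This is what makes dialback resistant to a merely misissued cert: presenting the key is useless unless you also answer the verify request routed via DNS.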
Another fun fact: the Mozilla root store, which I'd guess the vast majority of XMPP servers are using as their trust store, has ZERO rules governing clientAuth issuance[1]. CAs are allowed to issue clientAuth-only certificates under a technically-constrained non-TLS sub CA to anyone they want without any validation (as long as the check clears ;-). It has never been secure to accept the clientAuth EKU when using the Mozilla root store.
[1] https://www.mozilla.org/en-US/about/governance/policies/secu...
> The current CA ecosystem is *heavily* driven by web browser vendors (i.e. Google, Apple, Microsoft and Mozilla), and they are increasingly hostile towards non-browser applications using certificates from CAs that they say only provide certificates for consumption by web browsers.
Let's translate and simplify:
> The current CA ecosystem is Google. They want only Google applications to get certificates from CAs.
Why did LE make this change? It feels like a rather deliberate attack on the decentralised web.
I can think of a couple of other ways that client certificates could work, but they have problems too:
1. Use DANE to verify the client certificate. But that requires DNSSEC, which isn't widely deployed. It would probably require new implementations of the handshake to check the client cert, and would add latency since the server has to do a DNS lookup to verify the client's cert.
2. When the server receives a request, it makes an HTTPS request to a well-known endpoint on the domain in the client cert's subject that serves a CA certificate, then checks that the client cert is signed by that CA. The client generates its client cert with that CA (or even uses the same self-signed cert for both). This way the authenticity of the client CA is verified via the web PKI cert. But the implementation is kind of complicated, and it has an even worse latency problem than 1.
3. The server has an endpoint where a client can request a client certificate for a domain, probably with a fairly short expiration, by submitting a CSR or equivalent. The server then responds by making an HTTPS POST to a well-known endpoint on the requested domain containing a certificate signed by the server's own CA. But for that to work, the registration request needs to be unauthenticated, which could make it vulnerable to DoS attacks. It also requires state on the client side to connect the secret key with the final cert (unless the server generated a new secret key for the client, which probably isn't ideal). And the client should probably cache the cert until it expires.
And AFAIK, all of these would require changes to how XMPP and other federated protocols work.
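To make option 2 concrete, here is a minimal sketch of the server side, under the simplifying assumption that the well-known endpoint just publishes the SHA-256 fingerprint of the cert the domain will present. The endpoint path and the injected fetcher are hypothetical; a real implementation would fetch over HTTPS and validate the web PKI chain, which is what actually anchors the trust:

```python
import hashlib
import hmac
from typing import Callable

# Hypothetical well-known path; fetch_https is injected so it can be stubbed
# for testing -- in production it would be an HTTPS GET whose server-cert
# validation ties the published fingerprint to the domain.
WELL_KNOWN = "/.well-known/xmpp-client-cert"

def verify_client_cert(domain: str, client_cert_der: bytes,
                       fetch_https: Callable[[str, str], str]) -> bool:
    """Return True iff the fingerprint published at the domain's well-known
    endpoint matches the certificate presented on this connection."""
    published = fetch_https(domain, WELL_KNOWN).strip().lower()
    presented = hashlib.sha256(client_cert_der).hexdigest()
    return hmac.compare_digest(published, presented)
```

The latency cost mentioned above is visible here: every inbound connection triggers an outbound HTTPS fetch, so caching the published fingerprint would be essential in practice.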
Prosody is also the base of Snikket[1], a popular recent XMPP server. Snikket is basically just a Prosody config.[2]
[1] https://snikket.org/service/quickstart/
[2] https://github.com/snikket-im/snikket-server/blob/master/ans...
From https://letsencrypt.org/2025/05/14/ending-tls-client-authent...
"This change is prompted by changes to Google Chrome’s root program requirements, which impose a June 2026 deadline to split TLS Client and Server Authentication into separate PKIs. Many uses of client authentication are better served by a private certificate authority, and so Let’s Encrypt is discontinuing support for TLS Client Authentication ahead of this deadline."
TL;DR blame Google
The problem here is that when alice@chat.example.com and bob@xmpp.example2.com talk to each other, chat.example.com asks "Are you xmpp.example2.com?" and xmpp.example2.com asks "Are you chat.example.com?"
If you strictly require the side that opens the TCP connection to only use client certs and require the side that gets the TCP connection to only use server certs, then workflows where both sides validate each other become impossible with a single connection.
You could have each server open a TCP connection to the other, but then you have a single conversation spread across multiple connections. It gets messy fast, especially if you try to scale beyond a single server -- the side that initiates the first outgoing connection has to receive the second incoming connection, so you have to somehow get your load balancer to match the second connection with the first and route it to the same box.
Then at the protocol level, you'd essentially have each connection's server send a random-number challenge to the client saying, "I can't authenticate clients because they don't have certs, so please echo this back on the other connection, where you're the server and I can authenticate you." The complexity and subtlety of this coordination dance seems like asking for security issues.
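A toy sketch of that dance shows where the subtlety lives: each side has to track nonces it issued on one connection and consume them on the reverse connection. This is a single-process illustration (all names are made up); the load-balancer problem described above is precisely that this state would have to be shared across boxes:

```python
import secrets

class ChallengeTracker:
    """Nonces issued on the connection where we couldn't authenticate the
    client, to be echoed back on the reverse connection where we can."""

    def __init__(self):
        self._pending: dict[str, str] = {}  # claimed domain -> outstanding nonce

    def issue(self, claimed_domain: str) -> str:
        # Sent to the unauthenticated peer on connection A.
        nonce = secrets.token_hex(16)
        self._pending[claimed_domain] = nonce
        return nonce

    def redeem(self, authenticated_domain: str, echoed: str) -> bool:
        # Only call this on connection B, where the peer's *server* cert has
        # already proven it really is `authenticated_domain`. One-shot: the
        # nonce is consumed whether or not it matches.
        expected = self._pending.pop(authenticated_domain, None)
        return expected is not None and secrets.compare_digest(expected, echoed)
```

Even in this toy, you can see replay and state-synchronization questions appearing immediately.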
If I was implementing XMPP I would be very tempted to say, "Don't be strict about client vs. server certs, let a client use a server cert to demonstrate ownership of a domain -- even if it's forbidden by RFC and even if we have to patch our TLS library to do it."
For those wondering whether ejabberd on Debian systems will be impacted: it seems there is no fix for now; the issue is being tracked here: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1127369
Client authentication with publicly-trusted certificates (i.e. ones chaining to roots in one of the major four or five trust-store programs) is bad. It doesn't actually authenticate anything at all, and it never has.
No one who uses it is authenticating anything more than that the other party has an internet connection and, perhaps, the ability to read. No part of the Subject DN or SAN is checked. It's just that it's "easy" to rely on an existing trust store rather than implement something secure using private PKI.
Some providers who "require" public TLS certs for mTLS even specify particular products (OV or EV from specific CAs), not realising that both the CAs and the roots are going to rotate more frequently in future.
I feel like using web PKI for client authentication doesn't really make sense in the first place. How do you verify that the common name/subject alt name actually matches when using a client cert?
Using web PKI for client certs seems like a recipe for disaster: servers would just verify that the cert is signed, but since anyone can get one signed, anyone can spoof.
And this isn't just hypothetical. I remember that xmlsec (a library for validating XML signatures, used primarily for SAML) used to accept web PKI certs for signature validation in addition to the configured cert, which resulted in a lot of SAML bypasses where you could pass validation by signing the SAML response with any certificate from Let's Encrypt, including the attacker's.
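The missing check the comments above describe is binding the presented certificate to the identity being claimed. A simplified sketch of what that check has to look like, assuming SAN tuples in the shape Python's `ssl.SSLSocket.getpeercert()` returns, with wildcard handling reduced to a single left-most label (real hostname matching per RFC 6125 has more edge cases):

```python
def cert_matches_domain(peercert: dict, claimed_domain: str) -> bool:
    """Check that the claimed identity appears in the cert's subjectAltName.
    'Signed by a public CA' alone proves nothing about *who* connected."""
    claimed = claimed_domain.lower().rstrip(".")
    for kind, value in peercert.get("subjectAltName", ()):
        if kind != "DNS":
            continue
        san = value.lower().rstrip(".")
        if san == claimed:
            return True
        # Simplified wildcard: '*.example.com' matches exactly one extra label.
        if san.startswith("*.") and "." in claimed:
            if claimed.split(".", 1)[1] == san[2:]:
                return True
    return False
```

The xmlsec-style bypasses happened precisely because validation stopped at "chains to a trusted root" and never reached a check like this.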
Is there any reason why things gravitate towards being web-centric, especially Google-centric? Google's browser policies triggered the LE change, and most CAs really just focus on what websites need rather than on non-web services. That isn't helpful, considering that browsers are now terribly inefficient (I mean, come on, 1 GB of RAM for 3 tabs of Firefox while still buffering?!) while XMPP is significantly more lightweight and yet more featureful than, say, Discord.
Shame LE didn't give people the option to generate client-only and client+server auth certs.
I wonder if issues like this couldn't be a use case for DANE.
Basically, they will break all mTLS usage with their certificates.
I really fail to understand or sympathize with Let's Encrypt limiting their certs like this. What is gained by slamming the door on anything other than servers being able to get certs?
In this case I do think it makes sense for servers to accept certs even when marked for server use, since it's an s2s use case. But this just feels like such unnecessary clamping down. To have finally made certs plentiful and available for use... then to take that away? Bother!
What is the point of restricting a certificate to "server" or "client" use, anyway?
I like how the article describes how certificates work for both client and server. I know a little bit about it but what I read helps to reinforce what I already know and it taught me something new. I appreciate it when someone takes the time to explain things like this.