Battering RAM – Low-cost interposer attacks on confidential computing

by pabs3 on 10/6/2025, 7:47 AM with 59 comments

by fweimer on 10/6/2025, 9:05 AM

I'm kind of confused by AMD's and Intel's response. I thought both companies were building technology that allows datacenter operators to prove to their customers that they do not have access to data processed on the machines, despite having physical access to them. If that's out of scope, what is the purpose of these technologies?

by matja on 10/6/2025, 9:47 AM

> No, our interposer only works on DDR4

Not surprising - even having 2 DDR5 DIMMs on the same channel compromises signal integrity enough that the frequency has to drop by ~30-40%, so perhaps the best mitigation at the moment is to ensure the host is using the fastest DDR5 available.

So - Is the host DRAM/DIMM technology and frequency included in the remote attestation report for the VM?

by no_time on 10/6/2025, 8:16 AM

I find it reassuring that you can still get access to the data running on your own device, despite all the tens of thousands of engineering hours being poured into preventing just that.

by schoen on 10/6/2025, 8:07 AM

I think I talked about this possibility with Bunnie Huang about 15 years ago. As I recall, he said it was conceptually achievable. I guess it's also practically achievable!

by munchlax on 10/6/2025, 11:38 AM

Dupe of: https://news.ycombinator.com/item?id=45439286

11 points by mici 4 days ago

by rhodey on 10/6/2025, 11:43 AM

I hope people don't give up on TEEs - see AWS Nitro.

The AWS business is built on isolating compute, so IMO AWS is the best choice.

I've built up a stack for doing AWS Nitro dev:

https://lock.host/

https://github.com/rhodey/lock.host

With Intel and AMD, the attestation flow has to prove not only that you are using the tech but also who is hosting the CPU.

With AWS Nitro, Amazon is always the one hosting the CPU.
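
Very roughly, the relying-party check looks like this (the field names are made up for illustration; real Nitro/SGX/SEV attestation documents and verification libraries look different):

    # Toy relying-party check: two separate claims have to hold.
    def accept(report: dict, expected_measurement: str, expected_operator: str) -> bool:
        # 1. Is the code I expect actually running inside the enclave?
        if report["measurement"] != expected_measurement:
            return False
        # 2. Who physically operates the CPU? With Nitro the answer is always
        #    Amazon; with bare Intel/AMD you have to establish this yourself.
        return report["operator"] == expected_operator

    report = {"measurement": "abc123", "operator": "aws"}
    print(accept(report, "abc123", "aws"))  # True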

by addaon on 10/6/2025, 11:53 AM

This seems pretty trivial to fix (or at least work around) by adding an enclave generation number to the key initialization inputs. (They mention that the key is only based on the physical address, but surely it has to include CPUID or something similar as well?) Understood that this is likely hardware key generation so won’t be fixed without a change, and that persistent generation counters are a bit of a pain… but what else am I missing?
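
Something like this toy sketch is what I mean; the KDF inputs here are my own guesses, not Intel's or AMD's actual derivation:

    import hashlib

    def block_key(device_secret: bytes, phys_addr: int, generation: int) -> bytes:
        # Toy KDF: mix a fused per-CPU secret, the physical address, and a
        # persistent enclave-generation counter. Ciphertext captured under
        # generation N is useless once the counter ticks to N+1.
        material = (device_secret
                    + phys_addr.to_bytes(8, "little")
                    + generation.to_bytes(8, "little"))
        return hashlib.sha256(material).digest()

    secret = b"\x11" * 32  # hypothetical fused per-CPU secret
    old = block_key(secret, 0x1000, generation=1)
    new = block_key(secret, 0x1000, generation=2)
    assert old != new      # generation-1 ciphertext no longer matches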

by Simple8424 on 10/6/2025, 8:36 AM

Is this making confidential computing obsolete?

by acidburnNSA on 10/6/2025, 1:50 PM

Damn. I was hoping that confidential compute could allow nuclear reactor design work (export controlled, not classified) to go into the public cloud and avoid govcloud's high premium costs. But this kind of takes the wind out of the idea.

by mike_hearn on 10/6/2025, 2:53 PM

Not a great paper, which is why the "advisories" are so short. All they've done is show that some products meet their advertised threat model. Intel has a solution: upgrade your CPU. AMD do not. Once again Intel are ahead when it comes to confidential computing.

The story here is a little complex. Some years ago I flew out to Oregon and met the designers of SGX. It's a good design and it's to our industry's shame that we haven't used it much, as tech like this can solve a lot of different security and privacy problems.

SGX as originally designed was not attackable this way. This kind of RAM interposer attack was anticipated and the hardware was designed to block it by using memory integrity trees, in other words, memory was not only being encrypted by the CPU on the fly (cheap) but RAM was also being hashed into a kind of Merkle tree iirc which the CPU would check on access. So even if you knew the encryption key, you could not overwrite RAM or play games with it. It's often overlooked but encryption doesn't magically make storage immutable. An attacker can still overwrite encrypted data, delete parts, replay messages, redirect your write requests or otherwise mess with it. It takes other cryptographic techniques to block those kinds of activities, and "client SGX" had them (I'm not sure SEV ever did).
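
To make the "encryption alone isn't immutability" point concrete, here is a toy version of that integrity check in Python (just the idea, not SGX's actual tree layout or hash choice):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Untrusted RAM: four ciphertext blocks an interposer can overwrite or replay.
    blocks = [b"ciphertext-%d" % i for i in range(4)]

    def root_of(blks):
        # Tiny two-level Merkle tree; the root lives in trusted on-chip storage.
        leaves = [h(b) for b in blks]
        return h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))

    trusted_root = root_of(blocks)

    # The attacker replays block 0's ciphertext into block 2 - no key needed...
    blocks[2] = blocks[0]

    # ...and the CPU's tree walk on the next access catches the mismatch.
    assert root_of(blocks) != trusted_root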

This made sense because the SGX design followed security best practices, namely, that you should minimize the size of the trusted computing base. More code that's trusted = more potential for mistakes = more vulnerabilities. So SGX envisions apps having small trusted "enclaves", sort of like protected kernels, that untrusted code then uses. Cryptography ties the whole thing together. In a model like this an enclave doesn't need a large amount of RAM because the bulk of the app is running outside of the TCB.
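
In code terms the intended pattern looks roughly like this (a conceptual sketch, not the actual SGX SDK ECALL/OCALL interface):

    import hmac, hashlib

    class TinyEnclave:
        """The small trusted piece: holds a key, exposes one narrow operation."""

        def __init__(self, sealed_key: bytes):
            self._key = sealed_key                    # never leaves the enclave

        def sign(self, message: bytes) -> bytes:      # the only entry point
            return hmac.new(self._key, message, hashlib.sha256).digest()

    # Everything else - parsing, storage, networking - runs untrusted and only
    # ever sees messages and MACs, never the key.
    enclave = TinyEnclave(sealed_key=b"\x33" * 32)
    tag = enclave.sign(b"transaction #42")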

Unfortunately, at this point Intel discovered a sad and depressing but fundamental truth about the software industry: our tolerance for taking on additional complexity to increase security rounds to zero, and the enclave programming model is complex. The number of people who actually understand how to use enclaves as a design primitive can probably fit into a single large conference room. The number of apps that used them in the real world, in a way that actually met some kind of useful threat model, I'm pretty sure is actually near zero [1].

This isn't the fault of SGX! From a theoretical perspective, it is sound and the way it was meant to be used is sound. But actually exploiting it properly required more lift than the software industry could give. For example, to obtain the biggest benefits (SaaS you can use without trusting it) would have required some tactical changes to web browsers, changes to databases, changes to how such apps are designed and so on. Nobody tried to coordinate such changes and Intel, being a business, could not afford to wait for a few decades to see if anyone picked up the ball on that (their own software engineering efforts were good as far as they went but not ambitious enough to pull off the vision).

Instead what happened is that potential customers said to them (and AMD): look, we want extra security, but we don't want to make any effort. We want to just run containers/VMs in the cloud and have them be magically secure. Intel looked at what they had and said OK, well, um, I guess we can maybe run bigger apps inside enclaves. Maybe even whole VMs. So they went away and did a redesign, but then they hit a fundamental physics problem: as you expand the amount of encrypted and protected RAM the Merkle tree protecting its integrity gets bigger and bigger. That means every cache miss has to recursively do a tree walk to ensure the data read from RAM is correct. And that kills performance. For small enclaves the tree is shallow and the costs aren't too bad. For big enclaves, well ... the performance rapidly becomes problematic, especially as the software inside expects to be running at full speed (as we are no longer designing with SGX in mind now but just throwing any old stuff into the protected space).
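
Back of the envelope (64-byte cache lines and an 8-ary tree are my assumptions, not Intel's actual geometry):

    def tree_depth(protected_bytes: int, line: int = 64, arity: int = 8) -> int:
        # One node per 'arity' children per level; a cache miss touches roughly
        # one node per level on the way up to the on-chip root.
        nodes = protected_bytes // line
        depth = 0
        while nodes > 1:
            nodes = -(-nodes // arity)  # ceiling division
            depth += 1
        return depth

    print(tree_depth(128 * 2**20))  # ~128 MiB client-SGX-sized EPC  -> 7 levels
    print(tree_depth(512 * 2**30))  # ~512 GiB of protected VM RAM   -> 11 levels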

So Intel released a new version gamely called "scalable SGX" which scaled by removing the memory integrity tree. As the point of that tree was to stop bus interposer attacks, they provided an updated threat model that excluded them. The tech is still useful and blocks some attacks (e.g. imagine a corrupted developer on a cloud hypervisor team). But it was no longer as strong as it once was.

Knowing this, they set about creating yet another memory encryption tech called TME-MK which assigns each memory page its own unique encryption key. This prevented the kind of memory relocation attacks the "Battering RAM" interposer is doing. They also released a new tech that is sort of like SGX for whole virtual machines, formally giving up on the idea the software industry would ever actually try to minimize TCBs. Sad, but there we go. Clouds have trusted brands and people aren't bothered by occasional reports of global root exploits in Azure. It would take a step change event to get more serious about this stuff.
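
The per-page-key idea as a toy, with a hash-derived XOR keystream standing in for the real block cipher (not Intel's actual key management):

    import hashlib

    def page_key(master: bytes, page_addr: int) -> bytes:
        return hashlib.sha256(master + page_addr.to_bytes(8, "little")).digest()

    def xor_crypt(key: bytes, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, key))

    master = b"\x22" * 32
    plaintext = b"secret enclave data!"

    # Ciphertext written through page 0x2000 with that page's key...
    ct = xor_crypt(page_key(master, 0x2000), plaintext)

    # ...read back through an aliased page 0x7000: wrong key, garbage out.
    assert xor_crypt(page_key(master, 0x7000), ct) != plaintext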

[1] You might think Signal would count. Its use of SGX does help to reduce the threat from malicious or hacked cloud operators, but it doesn't protect against the operators of the Signal service themselves as they control the client.

by exabrial on 10/6/2025, 1:21 PM

Physical access owns. If the computer can’t trust its components, what can it do?

by commandersaki on 10/6/2025, 9:53 AM

I like how the FAQ doesn't actually answer the questions (feels like AI slop but giving the benefit of the doubt), so I will answer on their behalf, without even reading the paper:

Am I impacted by this vulnerability?

For all intents and purposes, no.

Battering RAM needs physical access; is this a realistic attack vector?

For all intents and purposes, no.