Efficient AI: KV Caching and KV Sharing

by gauravmon 8/7/2025, 3:10 AM with 1 comment


New blog post on Efficient AI Techniques: KV Caching and KV Sharing.

Efficient training and inference are table stakes for LLMs these days, and these two algorithmic efficiency techniques work well for reducing LLM latency and memory usage while retaining model quality. Feel free to give it a read, and drop a note if I missed something.
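For anyone who wants a quick feel for the caching side before reading the post, here is a minimal sketch of KV caching in single-head autoregressive decoding. It is illustrative only (the toy dimensions, projection matrices, and function names are my own assumptions, not code from the blog): each step projects only the new token into a key and value, appends them to the cache, and attends the new query over everything cached so far instead of recomputing K and V for the full prefix.

```python
# Minimal sketch of KV caching for single-head attention during
# autoregressive decoding (illustrative; d_model and the toy
# projection matrices are assumptions, not the blog's code).
import numpy as np

d_model = 16
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

k_cache, v_cache = [], []          # grows by one entry per generated token

def decode_step(x_t):
    """Attend the new token's query over all cached keys/values.

    Without the cache, every step would re-project K and V for all
    past tokens (quadratic work over the sequence); with it, each
    step adds one new key/value pair and reuses the stored rest.
    """
    q = x_t @ W_q
    k_cache.append(x_t @ W_k)      # cache only the new key ...
    v_cache.append(x_t @ W_v)      # ... and the new value
    K = np.stack(k_cache)          # shape (T, d_model)
    V = np.stack(v_cache)
    attn = softmax(q @ K.T / np.sqrt(d_model))
    return attn @ V                # context vector for this step

# Toy decoding loop over 5 "token embeddings"
for _ in range(5):
    out = decode_step(rng.standard_normal(d_model))
print("cache length:", len(k_cache))  # -> 5
```

KV sharing builds on the same idea by letting multiple query heads (or layers, depending on the variant) reuse a single set of cached keys and values, which is where the additional memory savings come from; the post covers the details.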