From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem
by future-shock-ai on 3/28/2026, 10:42:23 PM
https://news.future-shock.ai/the-weight-of-remembering/
Comments
by: coppsilgold
There are also interesting approaches that more directly compress a large document or an entire codebase into a smaller set of tokens, without getting the LLM to wing it. For example, Cartridges: https://hazyresearch.stanford.edu/blog/2025-06-08-cartridges

They basically use gradient descent to optimize the KV cache while freezing the network.
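A minimal sketch of the mechanics (hypothetical PyTorch; the paper uses a fancier self-distillation objective, plain next-token loss here just shows the idea, and cache_len and the init scale are made up):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Freeze the network; the only trainable tensors are a short KV cache
    # that we fit so the frozen model can reproduce the long document.
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tok = AutoTokenizer.from_pretrained("gpt2")
    model.requires_grad_(False)

    cfg = model.config
    cache_len = 64  # compressed prefix length, far shorter than the document
    head_dim = cfg.n_embd // cfg.n_head

    def kv():  # one trainable K or V tensor per layer
        return torch.nn.Parameter(0.02 * torch.randn(1, cfg.n_head, cache_len, head_dim))

    cache = [(kv(), kv()) for _ in range(cfg.n_layer)]
    opt = torch.optim.Adam([t for pair in cache for t in pair], lr=1e-2)

    doc = "..."  # the long document or codebase to compress (placeholder)
    ids = tok(doc, return_tensors="pt").input_ids
    mask = torch.ones(1, cache_len + ids.shape[1], dtype=torch.long)

    for step in range(200):
        # Condition the frozen model on the trainable cache; the loss
        # backpropagates into the cache tensors, not the weights.
        out = model(ids, past_key_values=[(k, v) for k, v in cache],
                    attention_mask=mask, labels=ids)
        out.loss.backward()
        opt.step()
        opt.zero_grad()

    # At inference you prepend the learned cache instead of the raw document.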
3/31/2026, 7:51:25 PM
by: az09mugen
Unrelated, but 69KB is how much RAM Voyager 1 has.
3/31/2026, 6:09:19 PM
by: LuxBennu
Good overview of the architecture side, but worth mentioning there's another axis that stacks on top of all of this: you can quantize the KV cache itself at inference time. In llama.cpp you can run q8_0 for keys and q4_0 for values, and it cuts cache memory roughly in half again on top of whatever GQA or MLA already saves you. I run Qwen 72B at 4-bit on an M2 Max 96GB, and the KV quant is what actually made longer contexts fit without running out of unified memory. Keys need more precision because they drive the attention scores, but values are far more tolerant of lossy compression, so the asymmetry works out.
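For anyone who wants to try it, a sketch via the llama-cpp-python bindings (the model path is a placeholder; the equivalent CLI flags are --cache-type-k q8_0, --cache-type-v q4_0, and -fa, since llama.cpp needs flash attention for a quantized V cache). q8_0 is ~8.5 bits per element and q4_0 ~4.5, so K+V together lands around 40% of f16:

    # Sketch: choose KV cache quantization at model load time.
    # type_k / type_v take ggml type ids (GGML_TYPE_Q8_0 = 8, GGML_TYPE_Q4_0 = 2).
    import llama_cpp

    llm = llama_cpp.Llama(
        model_path="qwen2.5-72b-instruct-q4_k_m.gguf",  # placeholder path
        n_ctx=32768,                      # the longer context we're trying to fit
        flash_attn=True,                  # required for a quantized V cache
        type_k=llama_cpp.GGML_TYPE_Q8_0,  # keys at 8-bit: they drive attention scores
        type_v=llama_cpp.GGML_TYPE_Q4_0,  # values at 4-bit: tolerate lossy compression
    )
    print(llm("The KV cache stores", max_tokens=32)["choices"][0]["text"])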
3/31/2026, 7:29:28 PM