
KV Cache Compression for Inference Efficiency in LLMs: A Review

BibTeX
@misc{zhang2025kvcachecompression,
      title={KV Cache Compression for Inference Efficiency in LLMs: A Review},
      author={Shouhua Zhang and Jiehan Zhou and Yanyu Liu and Yitian Zou and Jingying Fu and Sixiang Liu and You Fu},
      year={2025},
      eprint={2508.06297},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2508.06297},
}