ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference

BibTeX
@misc{li2025chunkkvsemanticpreservingkv,
      title={ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference}, 
      author={Bo Li and Xiang Liu and Zeyu Li and Zhenheng Tang and Xiaowen Chu and Xuming Hu and Peijie Dong},
      year={2025},
      eprint={2502.00299},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.00299}, 
}
GitHub: NVIDIA/kvpress (★ 651)
HTTPS: https://github.com/NVIDIA/kvpress
SSH:   git@github.com:NVIDIA/kvpress.git
CLI:   gh repo clone NVIDIA/kvpress
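
The paper's code is hosted in NVIDIA's kvpress library, which exposes KV cache compression methods ("presses") through a custom Hugging Face pipeline. Below is a minimal usage sketch following the pattern documented in the kvpress README; the model name, context, and question are placeholders, and ExpectedAttentionPress is used purely as an illustrative press. The ChunkKV method itself is implemented in the repository, but the exact press class name for it is not shown on this page, so substitute the appropriate class when running.

    # Minimal sketch, assuming the pipeline API from the kvpress README.
    # Importing kvpress registers the "kv-press-text-generation" pipeline task.
    from transformers import pipeline
    from kvpress import ExpectedAttentionPress  # illustrative press; swap in the ChunkKV press

    pipe = pipeline(
        "kv-press-text-generation",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # placeholder model
        device="cuda:0",
    )

    context = "..."   # the long document whose KV cache should be compressed
    question = "..."  # a question answered against the compressed cache

    # compression_ratio=0.5 keeps roughly half of the cached key-value pairs
    press = ExpectedAttentionPress(compression_ratio=0.5)
    answer = pipe(context, question=question, press=press)["answer"]
    print(answer)

The press is applied once while the context is prefilled, so the compressed cache is then reused for any number of questions against the same document.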