TPLA: Tensor Parallel Latent Attention for Efficient Disaggregated Prefill and Decode Inference

BibTeX
@misc{meng2025tplatensorparallel,
      title={TPLA: Tensor Parallel Latent Attention for Efficient Disaggregated Prefill \& Decode Inference},
      author={Fanxu Meng and Muhan Zhang and Yuxuan Wang and Di Yin and Xing Sun and Xiaojuan Tang and Pingzhi Tang},
      year={2025},
      eprint={2508.15881},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.15881},
}
GitHub: TransMLA
https://github.com/MuLabPKU/TransMLA