TPLA: Tensor Parallel Latent Attention for Efficient Disaggregated Prefill & Decode Inference
Abstract
Tensor-Parallel Latent Attention (TPLA) enhances tensor parallelism efficiency by partitioning latent representations and input dimensions, preserving the benefits of compressed key-value caches while maintaining strong representational capacity.
Multi-Head Latent Attention (MLA), introduced in DeepSeek-V2, compresses key-value states into a low-rank latent vector, caching only this vector to reduce memory. In tensor parallelism (TP), however, attention heads are computed across multiple devices, and each device must load the full cache, eroding the advantage of MLA over Grouped Query Attention (GQA). We propose Tensor-Parallel Latent Attention (TPLA): a scheme that partitions both the latent representation and each head's input dimension across devices, performs attention independently per shard, and then combines results with an all-reduce. TPLA preserves the benefits of a compressed KV cache while unlocking TP efficiency. Unlike Grouped Latent Attention (GLA), every head in TPLA still leverages the full latent representation, maintaining stronger representational capacity. TPLA is drop-in compatible with models pre-trained using MLA: it supports MLA-style prefilling and enables efficient tensor-parallel decoding without retraining. Applying simple orthogonal transforms -- e.g., the Hadamard transform or PCA -- before TP slicing further mitigates cross-shard interference, yielding minimal accuracy degradation. By reducing the per-device KV cache for DeepSeek-V3 and Kimi-K2, we achieve 1.79x and 1.93x speedups, respectively, at a 32K-token context length while maintaining performance on commonsense and LongBench benchmarks. TPLA can be implemented with FlashAttention-3, enabling practical end-to-end acceleration.
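To make the mechanism described above concrete, below is a minimal single-process sketch in PyTorch of the decode step: an orthogonal (Hadamard) mixing of the latent dimension before tensor-parallel slicing, independent per-shard softmax attention over each latent slice, and a sum standing in for the all-reduce. The shapes, the softmax scaling, the value up-projection `W_uv`, and the averaging used for the combine are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical TPLA decode sketch: shard the latent dimension, run attention
# per shard, and combine with a (simulated) all-reduce. Not the paper's code.
import torch

torch.manual_seed(0)

T, d_c, G = 16, 128, 2            # cached tokens, latent width, tensor-parallel shards
n_heads, d_h = 4, 32              # attention heads and per-head output width

latent_cache = torch.randn(T, d_c)              # compressed KV latent, one vector per token
q_latent     = torch.randn(n_heads, d_c)        # queries already mapped into latent space
W_uv         = torch.randn(d_c, n_heads * d_h)  # latent -> value up-projection (assumed shape)

# Orthogonal mixing before TP slicing (normalized Hadamard transform), so each
# shard's latent slice carries information from all latent coordinates.
H = torch.ones(1, 1)
while H.shape[0] < d_c:
    H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
H = H / d_c ** 0.5
latent_cache = latent_cache @ H
q_latent     = q_latent @ H
W_uv         = H.T @ W_uv        # orthogonality keeps the overall latent->value map unchanged

d_shard = d_c // G
outputs = []
for g in range(G):               # each iteration stands in for one TP rank
    sl = slice(g * d_shard, (g + 1) * d_shard)
    c_g, q_g, Wv_g = latent_cache[:, sl], q_latent[:, sl], W_uv[sl]
    # Per-shard attention: logits from the local latent slice, local softmax,
    # local value up-projection. The 1/sqrt(d_shard) scale is an assumption.
    logits = q_g @ c_g.T / d_shard ** 0.5          # [n_heads, T]
    probs  = torch.softmax(logits, dim=-1)
    v_g    = (c_g @ Wv_g).view(T, n_heads, d_h)    # [T, n_heads, d_h]
    outputs.append(torch.einsum("ht,thd->hd", probs, v_g))

# All-reduce across shards, simulated here by averaging the per-shard outputs.
attn_out = torch.stack(outputs).sum(dim=0) / G     # [n_heads, d_h]
print(attn_out.shape)
```

Because every shard attends over the full token set using its slice of the same latent cache, each head still sees the whole latent representation, which is the property the abstract contrasts with GLA; the orthogonal transform is what limits the interference introduced by the per-shard softmaxes.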
Community
Latent KV-cache attention optimized for tensor parallelism and prefill-decode (PD) disaggregation
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- The New LLM Bottleneck: A Systems Perspective on Latent Attention and Mixture-of-Experts (2025)
- Helix Parallelism: Rethinking Sharding Strategies for Interactive Multi-Million-Token LLM Decoding (2025)
- Share Your Attention: Transformer Weight Sharing via Matrix-based Dictionary Learning (2025)
- HGCA: Hybrid GPU-CPU Attention for Long Context LLM Inference (2025)
- A Random Matrix Theory Perspective on the Learning Dynamics of Multi-head Latent Attention (2025)
- HCAttention: Extreme KV Cache Compression via Heterogeneous Attention Computing for LLMs (2025)
- EARN: Efficient Inference Acceleration for LLM-based Generative Recommendation by Register Tokens (2025)