UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs
Abstract
UniQL, a unified post-training quantization and low-rank compression framework, enhances the deployment of large language models on mobile devices by reducing memory usage and improving token throughput while maintaining accuracy.
Deploying large language models (LLMs) on mobile platforms is challenging because of the device's limited memory and shared computational resources. Resource availability also fluctuates with the device's current workload, adding uncertainty to model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. Within this joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, quantization-aware singular value decomposition (SVD) to minimize quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. The framework performs weight sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates of up to 35%. Our experiments show that the quantized and pruned models achieve a 4x-5.7x memory reduction and a 2.7x-3.4x token-throughput improvement while maintaining accuracy within 5% of the original models at 15% pruning, across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: https://github.com/enyac-group/UniQL.
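To make the joint compression idea concrete, here is a minimal, self-contained sketch of combining a truncated SVD with low-bit fake quantization of the resulting factors. It illustrates only the general low-rank-plus-quantization pattern that the abstract describes; the helper names (`fake_quant`, `lowrank_plus_quant`) and the symmetric per-row quantizer are assumptions for illustration and do not reproduce UniQL's quantization-aware SVD, weight sorting, or fused kernels.

```python
import torch

def fake_quant(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Simulate symmetric per-row quantization with a quantize/dequantize round trip."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

def lowrank_plus_quant(w: torch.Tensor, rank: int, n_bits: int = 4):
    """Truncate the SVD of `w` to `rank` and quantize both factors.

    Returns the quantized factors and the relative reconstruction error,
    i.e. the quantity a quantization-aware decomposition tries to keep small.
    """
    u, s, vh = torch.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (out_features, rank)
    b = vh[:rank, :]             # (rank, in_features)
    a_q, b_q = fake_quant(a, n_bits), fake_quant(b, n_bits)
    rel_err = torch.linalg.norm(w - a_q @ b_q) / torch.linalg.norm(w)
    return a_q, b_q, rel_err.item()

# Toy example: a 1024x1024 projection compressed to rank 256 with 4-bit factors.
w = torch.randn(1024, 1024)
a_q, b_q, err = lowrank_plus_quant(w, rank=256, n_bits=4)
print(f"relative reconstruction error: {err:.3f}")
```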
Community
- Supports Transformers, State Space Models (SSMs), and hybrid architectures
- Efficient, quantization-friendly pruning algorithms
- One-pass framework for quantization + structured low-rank pruning
- On-device adaptive pruning driven by real-time memory availability (see the sketch after this list)
- 2.7×–3.4× latency speedups
- 4×–5.7× memory reductions
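As a complement to the on-device adaptive pruning bullet above, the following is a minimal sketch of one possible policy for choosing a pruning rate from real-time memory availability. The function and constant names (`pick_pruning_rate`, `SUPPORTED_RATES`, `headroom`) and the assumption that the footprint shrinks roughly linearly with the pruning rate are hypothetical illustrations, not UniQL's actual runtime logic; the only figure taken from the paper is the 35% maximum pruning rate.

```python
# Hypothetical on-device policy: pick the smallest pruning rate whose estimated
# footprint fits the memory currently available to the app.
SUPPORTED_RATES = [0.0, 0.05, 0.15, 0.25, 0.35]  # UniQL supports pruning up to 35%

def pick_pruning_rate(base_model_bytes: int, free_bytes: int,
                      headroom: float = 0.10) -> float:
    """Return the least pruning that fits, keeping `headroom` of free memory in
    reserve. Assumes the footprint shrinks roughly linearly with the rate."""
    budget = free_bytes * (1.0 - headroom)
    for rate in SUPPORTED_RATES:               # prefer the least pruning
        if base_model_bytes * (1.0 - rate) <= budget:
            return rate
    return SUPPORTED_RATES[-1]                 # fall back to maximum pruning

# Example: a 4 GB quantized model with 3.5 GB currently free on the device.
rate = pick_pruning_rate(base_model_bytes=4 * 2**30, free_bytes=int(3.5 * 2**30))
print(f"selected pruning rate: {rate:.0%}")   # -> 25% under these assumptions
```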
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- VecInfer: Efficient LLM Inference with Low-Bit KV Cache via Outlier-Suppressed Vector Quantization (2025)
- AMAQ: Adaptive Mixed-bit Activation Quantization for Collaborative Parameter Efficient Fine-tuning (2025)
- Mixed-Precision Quantization for Language Models: Techniques and Prospects (2025)
- XQuant: Achieving Ultra-Low Bit KV Cache Quantization with Cross-Layer Compression (2025)
- SpecQuant: Spectral Decomposition and Adaptive Truncation for Ultra-Low-Bit LLMs Quantization (2025)
- SLMQuant: Benchmarking Small Language Model Quantization for Practical Deployment (2025)
- SALS: Sparse Attention in Latent Space for KV cache Compression (2025)