arxiv:2512.03383

UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

Published on Dec 3
· Submitted by Hung-Yueh Chiang on Dec 4

Abstract

AI-generated summary: UniQL, a unified post-training quantization and low-rank compression framework, enhances the deployment of large language models on mobile devices by reducing memory usage and improving token throughput while maintaining accuracy.

Deploying large language models (LLMs) on mobile platforms faces significant challenges due to the limited memory and shared computational resources of the device. Resource availability also fluctuates with the device's current workload, adding uncertainty to model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. Within this joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, a quantization-aware singular value decomposition (SVD) that minimizes quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Our framework performs weight-sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates of up to 35%. Our experiments show that quantized and pruned models achieve a 4x-5.7x memory reduction and a 2.7x-3.4x token-throughput improvement while maintaining accuracy within 5% of the original models at 15% pruning, across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: https://github.com/enyac-group/UniQL.
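
To make the joint idea concrete, here is a minimal sketch (in PyTorch, for illustration only; this is not the paper's actual algorithm) of compressing one weight matrix along both axes: a truncated SVD for the low-rank step, followed by a simple symmetric int8 quantizer applied to the factors. The quantization scheme and all names are assumptions.

```python
import torch

def truncated_svd(W: torch.Tensor, rank: int):
    """Rank-r factorization W ≈ A @ B via SVD."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out, rank), singular values folded into A
    B = Vh[:rank, :]             # (rank, in)
    return A, B

def quantize_int8(X: torch.Tensor):
    """Symmetric per-tensor int8 quantization: X ≈ scale * q."""
    scale = X.abs().max() / 127.0
    q = torch.clamp((X / scale).round(), -127, 127).to(torch.int8)
    return q, scale

W = torch.randn(1024, 1024)
A, B = truncated_svd(W, rank=256)   # low-rank compression step
qA, sA = quantize_int8(A)           # quantization step
qB, sB = quantize_int8(B)

W_hat = (qA.float() * sA) @ (qB.float() * sB)
print("relative error:", ((W - W_hat).norm() / W.norm()).item())
```

The naive sequential version above is only a baseline for intuition; the paper's quantization-aware SVD presumably couples the two steps so that the factorization itself accounts for quantization error.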

Community

Paper author and submitter

📚 Supports Transformers, State Space Models (SSMs), and hybrid architectures
✂️ Efficient, quantization-friendly pruning algorithms
🔗 One-pass framework for quantization + structured low-rank pruning
📱 On-device adaptive pruning driven by real-time memory availability (see the sketch below this list)
⚡ 2.7×–3.4× token-throughput speedups
🧠 4×–5.7× memory reductions
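
As referenced in the list above, here is a minimal sketch of how on-device adaptive pruning could work under the paper's stated constraints (components importance-sorted offline by the weight-sorting step, pruning rate configurable up to 35%). The memory-sizing heuristic and every function name below are assumptions, not UniQL's actual policy.

```python
import torch

MAX_PRUNE = 0.35  # the paper reports configurable pruning rates up to 35%

def pick_rank(A: torch.Tensor, B: torch.Tensor, free_bytes: int) -> int:
    """Pick how many low-rank components fit the current memory budget."""
    full_rank = A.shape[1]
    # Cost of keeping one more component: one column of A plus one row of B,
    # assuming both factors share a dtype.
    bytes_per_rank = (A.shape[0] + B.shape[1]) * A.element_size()
    affordable = free_bytes // bytes_per_rank
    floor = int(full_rank * (1.0 - MAX_PRUNE))  # never prune past 35%
    return max(floor, min(full_rank, int(affordable)))

def low_rank_linear(x, A, B, rank):
    """y = x @ (A[:, :rank] @ B[:rank, :]).T, computed factor by factor.
    Truncation is plain slicing because the components were
    importance-sorted offline."""
    return (x @ B[:rank, :].T) @ A[:, :rank].T

A = torch.randn(4096, 1024)   # (out_features, full_rank)
B = torch.randn(1024, 4096)   # (full_rank, in_features)
x = torch.randn(1, 4096)

rank = pick_rank(A, B, free_bytes=20 * 1024 * 1024)
y = low_rank_linear(x, A, B, rank)
print(rank, tuple(y.shape))
```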
