Omni-AVSR: Towards Unified Multimodal Speech Recognition with Large Language Models
Abstract
Omni-AVSR is a unified audio-visual LLM that efficiently supports ASR, VSR, and AVSR through multi-granularity training and parameter-efficient adaptation, achieving high accuracy with reduced resource use.
Large language models (LLMs) have recently achieved impressive results in speech recognition across multiple modalities, including Auditory Speech Recognition (ASR), Visual Speech Recognition (VSR), and Audio-Visual Speech Recognition (AVSR). Despite this progress, current LLM-based approaches typically address each task independently, training separate models that increase computational and deployment costs and miss potential cross-task synergies. They also rely on fixed-rate token compression, which limits the flexibility to balance accuracy against efficiency. These limitations highlight the need for a unified framework that supports ASR, VSR, and AVSR while enabling elastic inference. To this end, we present Omni-AVSR, a unified audio-visual LLM that combines efficient multi-granularity training with parameter-efficient adaptation. Specifically, we adapt the matryoshka representation learning paradigm to train efficiently across multiple audio and visual granularities, reducing the paradigm's inherent training cost. Furthermore, we explore three LoRA-based strategies for adapting the backbone LLM, balancing shared and task-specific specialization. Experiments on LRS2 and LRS3 show that Omni-AVSR matches or surpasses state-of-the-art baselines while training a single model at substantially lower training and deployment cost. The model also remains robust under acoustic noise, and we analyze how performance scales with LLM size, providing insights into the trade-off between performance and efficiency.
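To make the multi-granularity idea concrete, the minimal sketch below (not the authors' code) trains a toy audio-visual model on several audio and visual token compression rates at once, which is the basic matryoshka-style recipe the abstract builds on. All module names, the placeholder GRU decoder standing in for the LLM, and the specific compression rates are illustrative assumptions; the paper's efficiency improvements to this training loop and its LoRA adaptation strategies are not reproduced here.

```python
# Minimal sketch (assumed, illustrative only) of matryoshka-style multi-granularity
# training: audio/visual token sequences are average-pooled at several compression
# rates and a single shared decoder is trained on every granularity.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

AUDIO_RATES = (2, 4)      # assumed audio compression rates (frames pooled per token)
VIDEO_RATES = (1, 2, 5)   # assumed visual compression rates

def compress(tokens: torch.Tensor, rate: int) -> torch.Tensor:
    """Average-pool a (batch, time, dim) token sequence by `rate` along time."""
    if rate == 1:
        return tokens
    return F.avg_pool1d(tokens.transpose(1, 2), kernel_size=rate, stride=rate).transpose(1, 2)

class MultiGranularityAVSR(nn.Module):
    """Toy stand-in for the audio-visual front-end plus LLM decoder."""
    def __init__(self, dim: int = 256, vocab: int = 1000):
        super().__init__()
        self.audio_proj = nn.Linear(dim, dim)
        self.video_proj = nn.Linear(dim, dim)
        self.decoder = nn.GRU(dim, dim, batch_first=True)  # placeholder for the LLM
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.audio_proj(audio), self.video_proj(video)], dim=1)
        h, _ = self.decoder(x)
        return self.lm_head(h).mean(dim=1)  # (batch, vocab) toy prediction

def training_step(model, audio, video, target, optimizer) -> float:
    """One step: sum the loss over all (audio, visual) granularity pairs."""
    optimizer.zero_grad()
    loss = 0.0
    for a_rate, v_rate in itertools.product(AUDIO_RATES, VIDEO_RATES):
        logits = model(compress(audio, a_rate), compress(video, v_rate))
        loss = loss + F.cross_entropy(logits, target)
    loss.backward()
    optimizer.step()
    return float(loss)

if __name__ == "__main__":
    model = MultiGranularityAVSR()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    audio = torch.randn(2, 40, 256)   # (batch, audio frames, dim)
    video = torch.randn(2, 20, 256)   # (batch, video frames, dim)
    target = torch.randint(0, 1000, (2,))
    print("summed multi-granularity loss:", training_step(model, audio, video, target, opt))
```

Because every granularity shares one set of weights, a single trained model can be queried at whichever audio/visual compression rate fits the available compute, which is what enables the elastic inference described in the abstract.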
Community
Project website: https://umbertocappellazzo.github.io/Omni-AVSR
Code and checkpoints: https://github.com/umbertocappellazzo/Omni-AVSR
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MoME: Mixture of Matryoshka Experts for Audio-Visual Speech Recognition (2025)
- Adapting Speech Foundation Models with Large Language Models for Unified Speech Recognition (2025)
- HarmoniFuse: A Component-Selective and Prompt-Adaptive Framework for Multi-Task Speech Language Modeling (2025)
- VOX-KRIKRI: Unifying Speech and Language through Continuous Fusion (2025)
- Mitigating Attention Sinks and Massive Activations in Audio-Visual Speech Recognition with LLMs (2025)
- WAVE: Learning Unified & Versatile Audio-Visual Embeddings with Multimodal LLM (2025)
- Whisper-UT: A Unified Translation Framework for Speech and Text (2025)