---
license: mit
---
<div align="center">
<picture>
<source srcset="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo_darkmode.png?raw=true" media="(prefers-color-scheme: dark)">
<img src="https://github.com/XiaomiMiMo/MiMo-VL/raw/main/figures/Xiaomi_MiMo.png?raw=true" width="60%" alt="Xiaomi-MiMo" />
</picture>
</div>

<h3 align="center">
<b>
<span>─────────────────────────────────────────</span>
<br/>
MiMo Audio: Audio Language Models are Few-Shot Learners
<br/>
<span>─────────────────────────────────────────</span>
<br/>
</b>
</h3>

<br/>

<div align="center" style="line-height: 1;">
|
<a href="https://huggingface.co/collections/XiaomiMiMo/mimo-audio-68cc7202692c27dae881cce0" target="_blank">🤗 HuggingFace</a>
|
<a href="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/MiMo-Audio-Technical-Report.pdf" target="_blank">📄 Paper</a>
|
<a href="https://xiaomimimo.github.io/MiMo-Audio-Demo" target="_blank">📰 Blog</a>
|
<a href="https://huggingface.co/spaces/XiaomiMiMo/mimo_audio_chat" target="_blank">🔥 Online Demo</a>
|
<a href="https://github.com/XiaomiMiMo/MiMo-Audio-Eval" target="_blank">📏 MiMo-Audio-Eval</a>
|

<br/>
</div>

<br/>

## Introduction

Existing audio language models typically rely on task-specific fine-tuning to accomplish particular audio tasks. In contrast, humans can generalize to new audio tasks from only a few examples or simple instructions. GPT-3 showed that scaling next-token-prediction pretraining yields strong generalization in text, and we believe this paradigm applies equally to the audio domain. By scaling MiMo-Audio's pretraining data to over one hundred million hours, we observe the emergence of few-shot learning across a diverse set of audio tasks. We develop a systematic evaluation of these capabilities and find that MiMo-Audio-7B-Base achieves SOTA performance on both speech intelligence and audio understanding benchmarks among open-source models. Beyond standard metrics, MiMo-Audio-7B-Base generalizes to tasks absent from its training data, such as voice conversion, style transfer, and speech editing. It also demonstrates powerful speech-continuation capabilities, generating highly realistic talk shows, recitations, livestreams, and debates. At the post-training stage, we curate a diverse instruction-tuning corpus and introduce thinking mechanisms into both audio understanding and generation. MiMo-Audio-7B-Instruct achieves open-source SOTA on audio understanding benchmarks, spoken dialogue benchmarks, and instruct-TTS evaluations, approaching or surpassing closed-source models.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/Results.png?raw=true">
</p>
## Architecture
### MiMo-Audio-Tokenizer
MiMo-Audio-Tokenizer is a 1.2B-parameter Transformer operating at 25 Hz. It employs an eight-layer RVQ stack to generate 200 tokens per second. By jointly optimizing semantic and reconstruction objectives, we train MiMo-Audio-Tokenizer from scratch on a 10-million-hour corpus, achieving superior reconstruction quality and facilitating downstream language modeling.

<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/tokenizer.png?raw=true">
</p>

### MiMo-Audio

MiMo-Audio couples a patch encoder, an LLM, and a patch decoder to improve modeling efficiency for high-rate sequences and to bridge the length mismatch between speech and text. The patch encoder aggregates four consecutive time steps of RVQ tokens into a single patch, downsampling the sequence to a 6.25 Hz representation for the LLM. The patch decoder then autoregressively generates the full 25 Hz RVQ token sequence via a delayed-generation scheme.

<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/architecture.png?raw=true">
</p>
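
To make the rate arithmetic above concrete, here is a small sketch (ours, not code from the repo) of the bookkeeping: 25 Hz frames with an 8-layer RVQ stack give 200 tokens per second, and grouping four consecutive frames into one patch yields the 6.25 Hz sequence the LLM consumes. The array shapes and the codebook size of 1024 are illustrative assumptions:

```python
# Illustrative sketch of MiMo-Audio's token-rate bookkeeping (not repo code).
# 25 Hz frames x 8 RVQ layers = 200 tokens/s; grouping 4 frames per patch
# gives the LLM a 6.25 Hz patch sequence.
import numpy as np

FRAME_RATE_HZ = 25   # tokenizer frame rate
NUM_RVQ_LAYERS = 8   # RVQ codebooks per frame
PATCH_SIZE = 4       # frames aggregated into one LLM patch

seconds = 4
num_frames = FRAME_RATE_HZ * seconds                       # 100 frames
# Dummy token ids; 1024 is an assumed codebook size for illustration.
rvq_tokens = np.random.randint(0, 1024, size=(num_frames, NUM_RVQ_LAYERS))

# Patch encoder view: (num_frames, 8) -> (num_patches, 4 * 8)
num_patches = num_frames // PATCH_SIZE                     # 25 patches
patches = rvq_tokens[: num_patches * PATCH_SIZE].reshape(num_patches, -1)

print(rvq_tokens.shape)  # (100, 8) -> 200 tokens per second
print(patches.shape)     # (25, 32) -> 6.25 patches per second
```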

## Explore MiMo-Audio Now! 🚀🚀🚀

- 🎧 **Try the Hugging Face demo:** [MiMo-Audio Demo](https://huggingface.co/spaces/XiaomiMiMo/mimo_audio_chat)
- 📰 **Read the official blog:** [MiMo-Audio Blog](https://xiaomimimo.github.io/MiMo-Audio-Demo)
- 📄 **Dive into the technical report:** [MiMo-Audio Technical Report](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/MiMo-Audio-Technical-Report.pdf)

## Model Download

| Model | 🤗 Hugging Face |
|-------|-----------------|
| MiMo-Audio-Tokenizer | [XiaomiMiMo/MiMo-Audio-Tokenizer](https://huggingface.co/XiaomiMiMo/MiMo-Audio-Tokenizer) |
| MiMo-Audio-7B-Base | [XiaomiMiMo/MiMo-Audio-7B-Base](https://huggingface.co/XiaomiMiMo/MiMo-Audio-7B-Base) |
| MiMo-Audio-7B-Instruct | [XiaomiMiMo/MiMo-Audio-7B-Instruct](https://huggingface.co/XiaomiMiMo/MiMo-Audio-7B-Instruct) |
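
If you prefer fetching checkpoints programmatically, `huggingface_hub` can mirror each repo to a local directory that the demo below can then be pointed at. A minimal sketch, where the `models/` target paths are our own choice:

```python
# Download the released checkpoints with huggingface_hub.
# The local_dir values are arbitrary example paths, not repo conventions.
from huggingface_hub import snapshot_download

for repo in ["XiaomiMiMo/MiMo-Audio-Tokenizer", "XiaomiMiMo/MiMo-Audio-7B-Instruct"]:
    snapshot_download(repo_id=repo, local_dir=f"models/{repo.split('/')[-1]}")
```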
## Getting Started

Spin up the MiMo-Audio demo in minutes with the built-in Gradio app.

### Installation
```sh
git clone https://github.com/XiaomiMiMo/MiMo-Audio.git
cd MiMo-Audio
pip install -e .
```
### Run the demo
```sh
python run_mimo_audio.py
```

This launches a local Gradio interface where you can try MiMo-Audio interactively.
<p align="center">
<img width="95%" src="https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/assets/demo_ui.jpg?raw=true">
</p>
Enter the local paths for `MiMo-Audio-Tokenizer` and `MiMo-Audio-7B-Instruct`, then enjoy the full functionality of MiMo-Audio!
## Inference Scripts
### Base Model
We provide an example script to explore the **in-context learning** capabilities of `MiMo-Audio-7B-Base`.
See: [`inference_example_pretrain.py`](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/inference_example_pretrain.py)
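
The pattern the script exercises is plain next-token prediction: a handful of (input, output) demonstration pairs are interleaved in front of the query, and the base model continues the sequence with the answer. The sketch below shows only that prompt layout; the helper and type alias are our own illustration, and `inference_example_pretrain.py` remains the authoritative interface:

```python
# Conceptual layout of a few-shot audio prompt (our illustration, not the
# repo's API): k demonstration pairs followed by the query; the base model
# is asked to continue the pattern with the answer tokens.
from typing import List, Tuple

Tokens = List[int]  # patchified audio token ids from MiMo-Audio-Tokenizer

def build_fewshot_prompt(demos: List[Tuple[Tokens, Tokens]], query: Tokens) -> Tokens:
    """Interleave (input, output) demonstrations, then append the query."""
    prompt: Tokens = []
    for src, tgt in demos:
        prompt += src + tgt
    return prompt + query  # the model's continuation is the predicted answer
```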
### Instruct Model
To try the instruction-tuned model `MiMo-Audio-7B-Instruct`, use the corresponding inference script.
See: [`inference_example_sft.py`](https://github.com/XiaomiMiMo/MiMo-Audio/blob/main/inference_example_sft.py)
## Evaluation Toolkit

The full evaluation suite is available at 📏 [MiMo-Audio-Eval](https://github.com/XiaomiMiMo/MiMo-Audio-Eval).

This toolkit is designed to evaluate MiMo-Audio and the other recent audio LLMs mentioned in the paper. It provides a flexible and extensible framework that supports a wide range of datasets, tasks, and models.
## Citation
```bibtex
@misc{coreteam2025mimoaudio,
  title={MiMo-Audio: Audio Language Models are Few-Shot Learners},
  author={LLM-Core-Team Xiaomi},
  year={2025},
  url={https://github.com/XiaomiMiMo/MiMo-Audio},
}
```
## Contact
Please contact us at [[email protected]](mailto:[email protected]) or open an issue if you have any questions.