---
library_name: transformers
license: apache-2.0
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# Meta-Llama-3.1-8B-Instruct-Add-Speech-Token-4096-Nostrip

## Introduction

This repo contains the **Meta-Llama-3.1-8B-Instruct-Add-Speech-Token-4096-Nostrip** model used to train the [EMOVA](https://huggingface.co/collections/Emova-ollm/emova-models-67779d377bb8261e6057a320) series of models. Starting from the original [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) checkpoint, we insert speech tokens into its vocabulary for end-to-end omni-modal alignment, as shown below. The [EMOVA speech tokenizer](https://huggingface.co/Emova-ollm/emova_speech_tokenizer_hf) uses a total of 4096 speech tokens, so this checkpoint should be used as the initialization for **Stage 2: Omni-modal text-centric alignment** of EMOVA training.

```bash
# Source code can be found at https://github.com/emova-ollm/EMOVA#insert-speech-tokens-into-llm-vocabulary
python scripts/insert_speech_token.py \
  --origin_model_path meta-llama/Llama-3.1-8B-Instruct \
  --saved_model_path ./Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip \
  --num_speech_tokens 4096
```

## Usage

To train EMOVA with Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip, create a new model config and set its **language_model** parameters as follows. An example is provided [here](https://github.com/emova-ollm/EMOVA/blob/main/configs/_base_/models/llama3_1_internvit_anyres.py). See our [GitHub repo](https://github.com/emova-ollm/EMOVA#training-emova) for more details on training EMOVA. A minimal check of the extended vocabulary is sketched at the end of this card.

```python
language_model=dict(
    type='EmovaLlamaForCausalLM',              # wrapper class type for EMOVA
    pretrained_model_name_or_path='Emova-ollm/Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip',  # HuggingFace repo of the pre-trained LLM
    attn_implementation="flash_attention_2",   # attention implementation
    from_pretrained=True,                      # load pre-trained weights
),
```

## Citation

```bibtex
@article{chen2024emova,
  title={Emova: Empowering language models to see, hear and speak with vivid emotions},
  author={Chen, Kai and Gou, Yunhao and Huang, Runhui and Liu, Zhili and Tan, Daxin and Xu, Jing and Wang, Chunwei and Zhu, Yi and Zeng, Yihan and Yang, Kuo and others},
  journal={arXiv preprint arXiv:2409.18042},
  year={2024}
}

@article{grattafiori2024llama,
  title={The llama 3 herd of models},
  author={Grattafiori, Aaron and Dubey, Abhimanyu and Jauhri, Abhinav and Pandey, Abhinav and Kadian, Abhishek and Al-Dahle, Ahmad and Letman, Aiesha and Mathur, Akhil and Schelten, Alan and Vaughan, Alex and others},
  journal={arXiv preprint arXiv:2407.21783},
  year={2024}
}
```
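
## Sanity Check

As a quick check that the speech tokens were actually added, the checkpoint can be inspected with plain `transformers`, without going through the EMOVA codebase. This is a minimal sketch, not part of the official pipeline; the expected count below assumes exactly 4096 speech tokens were appended to Llama 3.1's base vocabulary of 128,256 entries.

```python
# Minimal sketch: verify the extended vocabulary without loading the full model weights.
# Assumption: exactly 4096 speech tokens were appended to the 128,256-entry base vocabulary.
from transformers import AutoConfig, AutoTokenizer

model_id = "Emova-ollm/Meta-Llama-3.1-8B-Instruct_add_speech_token_4096_nostrip"

tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)

print(len(tokenizer))     # expected: at least 128256 + 4096 = 132352
print(config.vocab_size)  # embedding table size reported by the model config
```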