---
license: apache-2.0
---

## 👉🏻 WenetSpeech-Chuan 👈🏻

## Highlight🔥

**WenetSpeech-Chuan TTS Models** have been released!

## Install

**Clone and install**

- Clone the repo

``` sh
git clone https://github.com/ASLP-lab/WenetSpeech-Chuan.git
cd WenetSpeech-Chuan/CosyVoice2-Chuan
```

- Create the Conda environment:

``` sh
conda create -n cosyvoice python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it with conda so it works on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
```

**Model download**

``` python
from huggingface_hub import snapshot_download

snapshot_download('ASLP-lab/Cosyvoice2-Chuan', local_dir='pretrained_models/Cosyvoice2-Chuan')
```

**Usage**

``` python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
import opencc

# Load the downloaded Cosyvoice2-Chuan checkpoint.
cosyvoice_base = CosyVoice2(
    'pretrained_models/Cosyvoice2-Chuan',
    load_jit=False,
    load_trt=False,
    load_vllm=False,
    fp16=False
)

# 16 kHz reference audio that provides the speaker prompt.
prompt_speech_16k = load_wav('asset/sg_017_090.wav', 16000)

# Text to synthesize (Sichuanese): "I'm telling you, Sichuan hot pot has to be
# the beef-tallow broth, and you have to dip it in the oil dish, got it?"
text = '我跟你说,四川火锅必须吃牛油锅底,而且必须蘸油碟,你知道了吗?'

# Instruct-style inference; the instruction '用四川话说这句话' means
# "Say this sentence in Sichuanese". Each yielded item contains one synthesized segment.
for i, j in enumerate(cosyvoice_base.inference_instruct2(text, '用四川话说这句话', prompt_speech_16k, stream=False)):
    torchaudio.save('base_{}.wav'.format(i), j['tts_speech'], cosyvoice_base.sample_rate)
```

## Contact

If you would like to reach our research team, feel free to email ziyu_zhang@mail.nwpu.edu.cn.