# WenetSpeech-Chuan

## Highlight 🔥

WenetSpeech-Chuan TTS Models have been released!

## Install

### Clone and install

- Clone the repo:

```bash
git clone https://github.com/ASLP-lab/WenetSpeech-Chuan.git
cd WenetSpeech-Chuan/CosyVoice2-Chuan
```

- Create a Conda environment:

```bash
conda create -n cosyvoice python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it with conda so it works on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
```
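
The usage example below imports from `third_party/Matcha-TTS`. If that directory is empty after cloning, it is most likely tracked as a git submodule (an assumption based on the upstream CosyVoice layout), in which case it can be fetched with:

```bash
# Pull submodules such as third_party/Matcha-TTS if the clone did not include them
git submodule update --init --recursive
```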

## Model download

```python
from huggingface_hub import snapshot_download
snapshot_download('ASLP-lab/Cosyvoice2-Chuan', local_dir='pretrained_models/Cosyvoice2-Chuan')
```
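
Alternatively, a minimal sketch of the same download from the command line with the `huggingface-cli` tool that ships with `huggingface_hub` (assuming a reasonably recent version is installed):

```bash
huggingface-cli download ASLP-lab/Cosyvoice2-Chuan --local-dir pretrained_models/Cosyvoice2-Chuan
```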

## Usage

```python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
import opencc  # optional: traditional/simplified Chinese conversion, not used in this example

# Load the Sichuanese-adapted CosyVoice2 model
cosyvoice_base = CosyVoice2(
    'pretrained_models/Cosyvoice2-Chuan',
    load_jit=False, load_trt=False, load_vllm=False, fp16=False
)

# Reference speech for voice cloning, loaded at 16 kHz
prompt_speech_16k = load_wav('asset/sg_017_090.wav', 16000)

# Text to synthesize: "Let me tell you, Sichuan hotpot has to be a beef-tallow broth,
# and you have to dip it in the oil dish, got it?"
text = '我跟你说,四川火锅必须吃牛油锅底,而且必须蘸油碟,你知道了吗?'

# The instruction '用四川话说这句话' means "say this sentence in Sichuanese"
for i, j in enumerate(cosyvoice_base.inference_instruct2(text, '用四川话说这句话', prompt_speech_16k, stream=False)):
    torchaudio.save('base_{}.wav'.format(i), j['tts_speech'], cosyvoice_base.sample_rate)
```
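
The upstream CosyVoice2 interface also exposes a zero-shot voice-cloning entry point, `inference_zero_shot`, which takes a transcript of the prompt audio instead of a style instruction. A minimal sketch, assuming the Chuan checkpoint keeps that API; the prompt transcript below is a placeholder to be replaced with the actual content of `asset/sg_017_090.wav`:

```python
# Zero-shot cloning sketch: prompt_text must be the transcript of the prompt audio
prompt_text = '...'  # placeholder; fill in the transcript of asset/sg_017_090.wav
for i, j in enumerate(cosyvoice_base.inference_zero_shot(text, prompt_text, prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice_base.sample_rate)
```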



## Contact
If you would like to get in touch with our research team, feel free to email [email protected].