---
language:
- ca
datasets:
- projecte-aina/3catparla_asr
- projecte-aina/corts_valencianes_asr_a
tags:
- audio
- automatic-speech-recognition
- whisper-large-v3
- barcelona-supercomputing-center
license: apache-2.0
library_name: transformers
base_model:
- openai/whisper-large-v3
---
# faster-whisper-3cat-cv21-valencian
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Conversion Details](#conversion-details)
- [Citation](#citation)
- [Additional Information](#additional-information)
## Model Description
The "BSC-LT/faster-whisper-3cat-cv21-valencian" is an acoustic model based on a [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master) version of [BSC-LT/whisper-3cat-cv21-valencian](https://huggingface.co/langtech-veu/whisper-3cat-cv21-valencian)
## Intended Uses and Limitations
This model is the result of converting [BSC-LT/whisper-3cat-cv21-valencian](https://huggingface.co/langtech-veu/whisper-3cat-cv21-valencian) into a lighter model using the Python module [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master).
The model can be used for Automatic Speech Recognition (ASR) in Catalan, with a particular focus on the Valencian accent. It is intended to transcribe Catalan audio files to plain text without punctuation.
### Installation
To use this model, first install [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master).
Create a virtual environment:
```bash
python -m venv /path/to/venv
```
Activate the environment:
```bash
source /path/to/venv/bin/activate
```
Install the module:
```bash
pip install faster-whisper
```
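As a quick sanity check, you can verify that the module imports correctly (this sketch assumes the installed release exposes a `__version__` attribute):
```python
# Quick sanity check of the installation (assumes __version__ is exposed by your release).
import faster_whisper

print(faster_whisper.__version__)
```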
### For Inference
To transcribe audio in Catalan using this model, you can follow this example:
```python
from faster_whisper import WhisperModel

model_size = "BSC-LT/faster-whisper-3cat-cv21-valencian"

# Run on GPU with FP16
model = WhisperModel(model_size, device="cuda", compute_type="float16")
# or run on GPU with INT8
# model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_size, device="cpu", compute_type="int8")

segments, info = model.transcribe("audio_in_catalan.mp3", beam_size=5, task="transcribe", language="ca")

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
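If you also need word-level timing, faster-whisper's `transcribe` accepts a `word_timestamps` flag. The following is a minimal sketch, reusing the same model identifier and the hypothetical `audio_in_catalan.mp3` file from the example above:
```python
from faster_whisper import WhisperModel

# Sketch: word-level timestamps with the same model, running on CPU with INT8.
model = WhisperModel("BSC-LT/faster-whisper-3cat-cv21-valencian", device="cpu", compute_type="int8")

segments, info = model.transcribe(
    "audio_in_catalan.mp3",
    beam_size=5,
    task="transcribe",
    language="ca",
    word_timestamps=True,
)

for segment in segments:
    for word in segment.words:
        print("[%.2fs -> %.2fs] %s" % (word.start, word.end, word.word))
```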
## Conversion Details
### Conversion procedure
This model is not a direct result of training. It is a conversion of [BSC-LT/whisper-3cat-cv21-valencian](https://huggingface.co/langtech-veu/whisper-3cat-cv21-valencian), a fine-tuned [Whisper](https://huggingface.co/openai/whisper-large-v3) model, using [faster-whisper](https://github.com/guillaumekln/faster-whisper/tree/master). The procedure to create the model is as follows:
```bash
ct2-transformers-converter --model BSC-LT/whisper-3cat-cv21-valencian \
  --output_dir faster-whisper-3cat-cv21-valencian \
  --copy_files preprocessor_config.json \
  --quantization float16
```
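The converter writes the converted weights to the `--output_dir` directory. If you run the conversion yourself, the resulting folder can be loaded by passing its local path to `WhisperModel`; the sketch below assumes the command above was run in the current working directory:
```python
from faster_whisper import WhisperModel

# Load the locally converted model by its directory path instead of the Hub identifier.
model = WhisperModel("faster-whisper-3cat-cv21-valencian", device="cpu", compute_type="int8")

segments, _ = model.transcribe("audio_in_catalan.mp3", task="transcribe", language="ca")
for segment in segments:
    print(segment.text)
```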
## Citation
If this model contributes to your research, please cite the work:
```bibtex
@misc{BSC2025-fasterwhisper3catcv21valencian,
  title={Recognition models for adaptation to Catalan variants},
  author={Hernandez Mena, Carlos Daniel and Messaoudi, Abir and Armentaro, Carme and España i Bonet, Cristina},
  organization={Barcelona Supercomputing Center},
  url={https://huggingface.co/BSC-LT/faster-whisper-3cat-cv21-valencian},
  year={2025}
}
```
## Additional Information
### Author
The conversion process was performed in June 2025 at the [Language Technologies Laboratory](https://huggingface.co/BSC-LT) of the [Barcelona Supercomputing Center](https://www.bsc.es/).
### Contact
For further information, please email .
### Copyright
Copyright (c) 2025 by the Language Technologies Laboratory, Barcelona Supercomputing Center.
### License
[Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the European Union – NextGenerationEU, within the framework of the project ILENIA with reference 2022/TL22/00215337.
The conversion of the model was possible thanks to the computing time provided by the [Barcelona Supercomputing Center](https://www.bsc.es/) through MareNostrum 5.
We acknowledge the EuroHPC Joint Undertaking for awarding BSC, Spain, access to MareNostrum 5.