---
configs:
  - config_name: yor
    data_files:
      - split: train
        path: yor_train.csv
      - split: dev
        path: yor_dev.csv
---

# Are LLMs Good Text Diacritizers? An Arabic and Yorùbá Case Study

Hawau Olamide Toyin, Samar Magdy, Hanan Aldarmaki

We investigate the effectiveness of large language models (LLMs) for text diacritization in two typologically distinct languages: Arabic and Yorùbá. To enable a rigorous evaluation, we introduce a novel multilingual dataset, MultiDiac, with diverse samples that capture a range of diacritic ambiguities. We evaluate 14 LLMs varying in size, accessibility, and language coverage, and benchmark them against 6 specialized diacritization models. Additionally, we fine-tune four small open-source models using LoRA for Yorùbá. Our results show that many off-the-shelf LLMs outperform specialized diacritization models for both Arabic and Yorùbá, but smaller models suffer from hallucinations. Fine-tuning on a small dataset can help improve diacritization performance and reduce hallucination rates.
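
Since this card defines a `yor` configuration with `train` and `dev` splits (see the YAML header above), the dataset can be loaded with the 🤗 `datasets` library. The repository id below is an assumption based on this card's location; substitute the actual path if it differs:

```python
from datasets import load_dataset

# Assumed repo id; replace with the actual Hugging Face dataset path.
ds = load_dataset("herwoww/MultiDiac", "yor")

print(ds)                 # shows the train/dev splits and their columns
print(ds["train"][0])     # inspect a single example
```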
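For the LoRA fine-tuning experiments mentioned above, a minimal setup with the `peft` library might look like the following sketch. The base model, target modules, and hyperparameters here are illustrative assumptions, not the exact configuration used in the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model (assumption); the paper fine-tunes four small
# open-source models, which are not specified in this card.
base = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach LoRA adapters to the attention projections; rank, alpha, and
# dropout are placeholder values.
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```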

Cite this work:

```bibtex
@misc{toyin2025llmsgoodtextdiacritizers,
  title={Are LLMs Good Text Diacritizers? An Arabic and Yor\`ub\'a Case Study},
  author={Hawau Olamide Toyin and Samar M. Magdy and Hanan Aldarmaki},
  year={2025},
  eprint={2506.11602},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.11602},
}
```