herwoww committed
Commit a1e4368 · verified · 1 Parent(s): c667ce8

Update README.md

Files changed (1)
  1. README.md +13 -1
README.md CHANGED
@@ -21,4 +21,16 @@ configs:
  <center> <b> Hawau Olamide Toyin, Samar Magdy, Hanan Aldarmaki </b> </center>
 
  We investigate the effectiveness of large language models (LLMs) for text diacritization in two typologically distinct languages: Arabic and Yoruba. To enable a rigorous evaluation, we introduce a novel multilingual dataset <strong>MultiDiac</strong>
- , with diverse samples that capture a range of diacritic ambiguities. We evaluate 14 LLMs varying in size, accessibility, and language coverage, and benchmark them against 6 specialized diacritization models. Additionally, we fine-tune four small open-source models using LoRA for Yoruba. Our results show that many off-the-shelf LLMs outperform specialized diacritization models for both Arabic and Yoruba, but smaller models suffer from hallucinations. Fine-tuning on a small dataset can help improve diacritization performance and reduce hallucination rates.
+ , with diverse samples that capture a range of diacritic ambiguities. We evaluate 14 LLMs varying in size, accessibility, and language coverage, and benchmark them against 6 specialized diacritization models. Additionally, we fine-tune four small open-source models using LoRA for Yoruba. Our results show that many off-the-shelf LLMs outperform specialized diacritization models for both Arabic and Yoruba, but smaller models suffer from hallucinations. Fine-tuning on a small dataset can help improve diacritization performance and reduce hallucination rates.
+
+
+ #### Cite this work:
+ @misc{toyin2025llmsgoodtextdiacritizers,
+ title={Are LLMs Good Text Diacritizers? An Arabic and Yor\`ub\'a Case Study},
+ author={Hawau Olamide Toyin and Samar M. Magdy and Hanan Aldarmaki},
+ year={2025},
+ eprint={2506.11602},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL},
+ url={https://arxiv.org/abs/2506.11602},
+ }