---
license: cc-by-nc-nd-4.0
language:
- fa
extra_gated_description: >-
  You agree not to use the dataset to conduct experiments that cause harm to
  human subjects.
extra_gated_fields:
  Full Name: text
  Organization (University): text
  Email address: text
  Country: country
  Could you briefly explain the purpose of using this dataset?: text
  I agree to use this dataset for non-commercial use ONLY: checkbox
task_categories:
- text-generation
tags:
- text
- corpus
---

# Matina: A Large-Scale 73B Token Persian Text Corpus

Text corpora are essential for training models used in tasks like summarization, translation, and large language models (LLMs). While various efforts have been made to collect monolingual and multilingual datasets in many languages, Persian has often been underrepresented due to limited resources for data collection and preprocessing. Existing Persian datasets are typically small and lack content diversity, consisting mainly of weblogs and news articles. This shortage of high-quality, varied data has slowed the development of NLP models and open-source LLMs for Persian. Since model performance depends heavily on the quality of training data, we address this gap by introducing the Matina corpus, a new Persian dataset of 72.9B tokens, carefully preprocessed and deduplicated to ensure high data quality. We further assess its effectiveness by training and evaluating transformer-based models on key NLP tasks. Both the dataset and the preprocessing code are publicly available, enabling researchers to build on and improve this resource for future Persian NLP advancements.

### Dataset Sources

- **Paper:** Matina: A Large-Scale 73B Token Persian Text Corpus ([Accepted at NAACL 2025](https://aclanthology.org/2025.naacl-long.462/))

## Citation

**BibTeX:**

```
@inproceedings{hosseinbeigi-etal-2025-matina,
    title = "Matina: A Large-Scale 73{B} Token {P}ersian Text Corpus",
    author = "Hosseinbeigi, Sara Bourbour and
      Taherinezhad, Fatemeh and
      Faili, Heshaam and
      Baghbani, Hamed and
      Nadi, Fatemeh and
      Amiri, Mostafa",
    editor = "Chiruzzo, Luis and
      Ritter, Alan and
      Wang, Lu",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.naacl-long.462/",
    doi = "10.18653/v1/2025.naacl-long.462",
    pages = "9143--9157",
    ISBN = "979-8-89176-189-6",
    abstract = "Text corpora are essential for training models used in tasks like summarization, translation, and large language models (LLMs). While various efforts have been made to collect monolingual and multilingual datasets in many languages, Persian has often been underrepresented due to limited resources for data collection and preprocessing. Existing Persian datasets are typically small and lack content diversity, consisting mainly of weblogs and news articles. This shortage of high-quality, varied data has slowed the development of NLP models and open-source LLMs for Persian. Since model performance depends heavily on the quality of training data, we address this gap by introducing the Matina corpus, a new Persian dataset of 72.9B tokens, carefully preprocessed and deduplicated to ensure high data quality. We further assess its effectiveness by training and evaluating transformer-based models on key NLP tasks. Both the dataset and preprocessing codes are publicly available, enabling researchers to build on and improve this resource for future Persian NLP advancements."
}
```
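
## Loading the Dataset

Because access is gated, you must first accept the terms on the dataset page and authenticate (for example, via `huggingface-cli login`). Below is a minimal loading sketch using the Hugging Face `datasets` library; the repository ID and the `train` split name are placeholders, since the card does not state them, and streaming is used to avoid downloading the full ~73B-token corpus at once.

```python
from datasets import load_dataset

# Minimal sketch, assuming a gated Hub repository. Replace the placeholder
# repo ID with this dataset's actual path on the Hub; authentication from a
# prior `huggingface-cli login` is picked up automatically.
dataset = load_dataset(
    "<namespace>/Matina",  # placeholder repo ID
    split="train",         # assumed split name
    streaming=True,        # iterate without materializing the whole corpus
)

# Inspect a few records to see the available fields.
for example in dataset.take(3):
    print(example)
```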