Pre-training data: MiMe-MeMo/Corpus-v1.1
How to use MiMe-MeMo/MeMo-BERT-01 with Transformers:

```python
# Load the tokenizer and the model with its pre-training head
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("MiMe-MeMo/MeMo-BERT-01")
model = AutoModelForPreTraining.from_pretrained("MiMe-MeMo/MeMo-BERT-01")
```
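Since this is a BERT-style masked language model, it can also be queried through the fill-mask pipeline. A minimal sketch (the Danish example sentence is invented for illustration; real inputs should use the period's orthography):

```python
from transformers import pipeline

# Query the masked-language-modelling head directly
fill_mask = pipeline("fill-mask", model="MiMe-MeMo/MeMo-BERT-01")

# Use the tokenizer's own mask token rather than hard-coding "[MASK]"
sentence = f"Hun gik ned ad {fill_mask.tokenizer.mask_token} i den tidlige Morgen."
for prediction in fill_mask(sentence):
    # Each prediction carries the filled-in token and its probability
    print(prediction["token_str"], round(prediction["score"], 3))
```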
MeMo-BERT-01 is a pre-trained language model for historical Danish and Norwegian literary texts (1870–1900). It was introduced in Al-Laith et al. (2024) as one of the first dedicated pre-trained language models (PLMs) for historical Danish and Norwegian, and serves as the baseline historical-domain model, trained from scratch on 19th-century Scandinavian novels.
Primary tasks: masked language modelling (fill-mask) and fine-tuning for downstream tasks on historical Danish and Norwegian literary texts, such as sentiment analysis and word sense disambiguation.
Intended users: researchers in NLP, digital humanities, and literary studies working with 19th-century Scandinavian texts.
Not intended for: contemporary Danish or Norwegian text; the model was trained exclusively on 19th-century literary material.
| Task | Dataset | Test F1 | Notes |
|---|---|---|---|
| Sentiment Analysis | MiMe-MeMo/Sentiment-v1 | 0.56 | 3-class (pos/neg/neu) |
| Word Sense Disambiguation | MiMe-MeMo/WSD-Skaebne | 0.43 | 4-class (pre-modern, modern, figure of speech, ambiguous) |
MeMo-BERT-01 performs worse than MeMo-BERT-03 (continued pre-training), highlighting the limitations of training from scratch on historical data without leveraging contemporary PLMs.
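For readers who want to reproduce this kind of downstream evaluation, the sketch below fine-tunes the model for the 3-class sentiment task. It is illustrative only: the column names ("text", "label"), the train/test split, and all hyperparameters are assumptions rather than the paper's protocol; check the MiMe-MeMo/Sentiment-v1 dataset card before running.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed dataset layout: "text" and "label" columns with train/test splits
dataset = load_dataset("MiMe-MeMo/Sentiment-v1")

tokenizer = AutoTokenizer.from_pretrained("MiMe-MeMo/MeMo-BERT-01")
model = AutoModelForSequenceClassification.from_pretrained(
    "MiMe-MeMo/MeMo-BERT-01", num_labels=3  # positive / negative / neutral
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="memo-bert-01-sentiment",  # hypothetical output directory
        per_device_train_batch_size=16,       # illustrative hyperparameters
        num_train_epochs=3,
    ),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
```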
If you use this model, please cite:
```bibtex
@inproceedings{al-laith-etal-2024-development,
    title = "Development and Evaluation of Pre-trained Language Models for Historical {D}anish and {N}orwegian Literary Texts",
    author = "Al-Laith, Ali and Conroy, Alexander and Bjerring-Hansen, Jens and Hershcovich, Daniel",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    pages = "4811--4819",
    url = "https://aclanthology.org/2024.lrec-main.431/"
}
```