# KAIdol NER Multilingual Model
This is a multilingual NER (Named Entity Recognition) model developed as part of the KAIdol Project.
It is based on Davlan/xlm-roberta-base-ner-hrl, fine-tuned on the WikiAnn dataset for Korean (ko), English (en), Spanish (es), and Portuguese (pt).
## Model Details

- Base model: Davlan/xlm-roberta-base-ner-hrl
- NER tags:
  - PER: Person
  - ORG: Organization
  - LOC: Location
- Tokenizer: `AutoTokenizer` from the base model
- Max sequence length: 128 tokens
## Training Configuration
| Parameter | Value |
|---|---|
| Epochs | 5 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Learning Rate | 5e-5 |
| Loss | CrossEntropy with class weights |
| Dataset | WikiAnn (en, ko, es, pt) |
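The class-weighted cross-entropy in the table can be sketched as follows. The inverse-frequency weighting scheme and the label counts below are illustrative assumptions; the card does not specify which weighting or counts were actually used.

```python
from collections import Counter

# Hypothetical token-label counts for a WikiAnn-like split (illustrative only).
label_counts = Counter({
    "O": 90000, "B-PER": 4000, "I-PER": 3000,
    "B-ORG": 3500, "I-ORG": 2500, "B-LOC": 5000, "I-LOC": 2000,
})

def inverse_frequency_weights(counts):
    """Weight each class by total / (num_classes * count), so rare tags
    contribute more to the loss. This is one common scheme, shown here
    only as a sketch of what 'CrossEntropy with class weights' can mean."""
    total = sum(counts.values())
    n = len(counts)
    return {label: total / (n * c) for label, c in counts.items()}

weights = inverse_frequency_weights(label_counts)
# In training, these values would be passed as a tensor to
# torch.nn.CrossEntropyLoss(weight=...).
```

With this scheme the dominant `O` class gets a small weight while rare tags like `I-LOC` get a large one, which counteracts the heavy class imbalance typical of NER data.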
## Performance Summary
| Language | F1-macro | PER F1 | ORG F1 | LOC F1 |
|---|---|---|---|---|
| English | 0.74 | 0.84 | 0.63 | 0.76 |
| Korean | 0.43 | 0.46 | 0.30 | 0.52 |
| Spanish | TBD | TBD | TBD | TBD |
| Portuguese | TBD | TBD | TBD | TBD |
Performance on `es` and `pt` will be updated after evaluation. Korean performance is limited due to tokenization issues in WikiAnn.
## Usage Example

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForTokenClassification.from_pretrained("developer-lunark/kaidol-ner-multilingual")
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/kaidol-ner-multilingual")

# Tokenize an input sentence and run inference
tokens = tokenizer("Barack Obama nació en Hawái.", return_tensors="pt")
output = model(**tokens)
```
## Label Mapping

```python
{
    'O': 0,
    'B-PER': 1,
    'I-PER': 2,
    'B-ORG': 3,
    'I-ORG': 4,
    'B-LOC': 5,
    'I-LOC': 6
}
```
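Given this mapping, per-token predictions can be grouped into entity spans with a small BIO decoder. The snippet below is a minimal sketch: the `decode_bio` helper and the predicted-id sequence are made up for illustration, not real model output.

```python
# Inverted form of the label mapping above.
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG",
            4: "I-ORG", 5: "B-LOC", 6: "I-LOC"}

def decode_bio(tokens, pred_ids):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, current = [], None  # current = (entity_type, [tokens])
    for tok, pid in zip(tokens, pred_ids):
        label = ID2LABEL[pid]
        if label.startswith("B-"):
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (label[2:], [tok])
        elif label.startswith("I-") and current and current[0] == label[2:]:
            current[1].append(tok)
        else:  # "O", or an I- tag without a matching B- tag
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:
        spans.append((current[0], " ".join(current[1])))
    return spans

# Example with a hypothetical prediction for "Barack Obama nació en Hawái .":
decode_bio(["Barack", "Obama", "nació", "en", "Hawái", "."],
           [1, 2, 0, 0, 5, 0])
# → [("PER", "Barack Obama"), ("LOC", "Hawái")]
```

In practice the predicted ids would come from `output.logits.argmax(-1)` on the model output, aligned to word boundaries via the tokenizer's offset mapping.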
## License
MIT License
## Contact

Developed by the KAIdol Project Team.
For questions or collaboration, contact: developer-lunark