Datasets:
- Tasks: Fill-Mask
- Sub-tasks: masked-language-modeling
- Formats: csv
- Size: 1M - 10M
- Tags: afrolm, active learning, language modeling, research papers, natural language processing, self-active learning
Commit b25c891 (parent: 3e874e5): Update README.md

README.md CHANGED
@@ -78,9 +78,9 @@ Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Senti

 ## HuggingFace usage of AfroLM-large
 ```python
-from transformers import
-model =
-tokenizer =
+from transformers import XLMRobertaModel, XLMRobertaTokenizer
+model = XLMRobertaModel.from_pretrained("bonadossou/afrolm_active_learning")
+tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
 tokenizer.model_max_length = 256
 ```
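The updated snippet can be read as a small loading routine: fetch the tokenizer and model from the `bonadossou/afrolm_active_learning` checkpoint named in the diff, then cap input length at 256 tokens. A minimal sketch of that pattern, assuming the `transformers` library is installed and the checkpoint is reachable on the Hugging Face Hub (the `load_afrolm` helper is an illustrative wrapper, not part of the README):

```python
# Sketch of the README's updated usage, wrapped in a reusable helper.
# Assumes `transformers` is installed and the Hub checkpoint is reachable.
from transformers import XLMRobertaModel, XLMRobertaTokenizer

# Checkpoint name taken from the diff above.
CHECKPOINT = "bonadossou/afrolm_active_learning"

def load_afrolm(checkpoint: str = CHECKPOINT, max_length: int = 256):
    """Load the AfroLM model and tokenizer, capping inputs at `max_length` tokens."""
    tokenizer = XLMRobertaTokenizer.from_pretrained(checkpoint)
    tokenizer.model_max_length = max_length  # same cap the README sets
    model = XLMRobertaModel.from_pretrained(checkpoint)
    return model, tokenizer
```

Setting `tokenizer.model_max_length` makes calls like `tokenizer(text, truncation=True)` cut sequences to 256 tokens, matching the model's intended input size; the checkpoint download happens only when `load_afrolm()` is actually called.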