# question_answering_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the SQuAD dataset. It achieves the following results on the evaluation set:
- Loss: 1.1407
## Model description
Extractive question answering model fine-tuned on SQuAD.
Intended uses & limitations
Educational demo of extractive QA with transformers. Not for production, medical, legal, or safety-critical use.
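For reference, here is a minimal usage sketch with the `transformers` question-answering pipeline; the model id is this repository's name, and the question/context strings are illustrative only:

```python
from transformers import pipeline

# Minimal extractive-QA sketch; the example inputs below are made up.
qa = pipeline("question-answering", model="ae-314/question_answering_model")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
# The pipeline returns the answer span extracted from the context plus a confidence score.
print(result["answer"], result["score"])
```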
## Citation Information

@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav and
      Zhang, Jian and
      Lopyrev, Konstantin and
      Liang, Percy",
    editor = "Su, Jian and
      Duh, Kevin and
      Carreras, Xavier",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
    eprint = "1606.05250",
    archivePrefix = "arXiv",
    primaryClass = "cs.CL",
}
## Training and evaluation data
Fine-tuned on the [SQuAD](https://huggingface.co/datasets/rajpurkar/squad) training split and evaluated on its validation split.
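For orientation, a short sketch of loading the same splits with the `datasets` library (dataset id as linked above):

```python
from datasets import load_dataset

# Load the SQuAD splits referenced above: "train" for fine-tuning, "validation" for evaluation.
squad = load_dataset("rajpurkar/squad")

print(squad)                          # DatasetDict with 'train' and 'validation' splits
print(squad["train"][0]["question"])  # peek at one example's question
print(squad["train"][0]["answers"])   # gold answer text and character start offsets
```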
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
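
The sketch below reconstructs these settings as `transformers` `TrainingArguments`; argument names follow the standard Trainer API, while `output_dir` and the evaluation strategy are assumptions, since the actual training script is not part of this card:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# output_dir and eval_strategy are assumptions, not taken from the original run.
training_args = TrainingArguments(
    output_dir="question_answering_model",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",               # fused AdamW; betas/epsilon left at defaults (0.9, 0.999) / 1e-08
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="epoch",                   # matches the per-epoch validation losses reported below
)
```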
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1995 | 1.0 | 5475 | 1.1494 |
| 0.9689 | 2.0 | 10950 | 1.0921 |
| 0.7334 | 3.0 | 16425 | 1.1407 |
### Framework versions
- Transformers 5.0.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1