Health History RoBERTa-en-ft
HealthHistoryRoBERTa-en-ft was fine-tuned from the pre-trained model efbaro/HealthHistoryRoBERTa-en using health insurance patient data organized as historical sentences. The original objective of the training was to predict hospitalizations; however, because the model may be useful for other tasks, we are making it available to the scientific community. It was trained on English data translated from Portuguese health insurance data. Other training approaches can be seen at:
Other pre-trained models
- HealthHistoryRoBERTa-en
- HealthHistoryRoBERTa-pt
- HealthHistoryBERT-en
- HealthHistoryBioBERT-en
- HealthHistoryBio_ClinicalBERT-en
- HealthHistoryBERTimbau-pt
Other models fine-tuned to predict hospitalizations
- HealthHistoryOpenLLaMA3Bv2-en-ft
- HealthHistoryOpenLLaMA7Bv2-en-ft
- HealthHistoryOpenLLaMA13B-en-ft
- HealthHistoryOpenCabrita3B-pt-ft
- HealthHistoryRoBERTa-en-ft
- HealthHistoryRoBERTa-pt-ft
- HealthHistoryBERTimbau-pt-ft
- HealthHistoryBERT-en-ft
- HealthHistoryBioBERT-en-ft
- HealthHistoryBio_ClinicalBERT-en-ft
Fine-tune Data
The model was fine-tuned on 83,715 historical sentences from health insurance patients, generated using the approach described in the paper Predicting Hospitalization from Health Insurance Data.
Model Fine-tune
Fine-tune Procedures
The model was fine-tuned on an NVIDIA RTX A5000 24 GB GPU in the laboratories of the IT department at UFPR (Federal University of Paraná).
Fine-tune Hyperparameters
We used a batch size of 64, a maximum sequence length of 512 tokens, 16 gradient accumulation steps, 2 epochs, and a learning rate of 5·10⁻⁵ (5e-5) to fine-tune this model.
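The hyperparameters above can be collected as a plain Python dictionary, using the standard Hugging Face TrainingArguments parameter names. This is an illustrative sketch, not the authors' exact training script, and it assumes the reported batch size is per device:

```python
# Hypothetical sketch of the fine-tuning hyperparameters listed above,
# keyed by the standard Hugging Face TrainingArguments parameter names.
hyperparameters = {
    "per_device_train_batch_size": 64,
    "gradient_accumulation_steps": 16,
    "max_seq_length": 512,   # maximum sequence length in tokens
    "num_train_epochs": 2,
    "learning_rate": 5e-5,
}

# With gradient accumulation, the effective batch size per optimizer step
# is batch size x accumulation steps (assuming batch size is per device).
effective_batch = (hyperparameters["per_device_train_batch_size"]
                   * hyperparameters["gradient_accumulation_steps"])
print(effective_batch)  # 1024
```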
Fine-tune time
Fine-tuning took 2 hours and 52 minutes per epoch.
Time to predict
Time to predict the first 500 sentences of dataset data_test_seed_en_12.csv: 1.35 seconds.
Time to predict the first 500 sentences + data tokenization of data_test_seed_en_12.csv: 3.46 seconds.
Predictions were made with the maximum sentence length allowed by the model.
How to use the model
Load the model via the transformers library:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("efbaro/HealthHistoryRoBERTa-en-ft")
model = AutoModel.from_pretrained("efbaro/HealthHistoryRoBERTa-en-ft")
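As an illustrative extension of the snippet above, a single historical sentence could be tokenized and scored. This is a sketch under two assumptions: that the fine-tuned checkpoint includes a classification head (loaded here with AutoModelForSequenceClassification rather than AutoModel), and the example sentence is invented for illustration, not taken from the dataset:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the fine-tuned checkpoint carries a sequence-classification
# head for the hospitalization-prediction task.
tokenizer = AutoTokenizer.from_pretrained("efbaro/HealthHistoryRoBERTa-en-ft")
model = AutoModelForSequenceClassification.from_pretrained(
    "efbaro/HealthHistoryRoBERTa-en-ft"
)
model.eval()

# Illustrative historical sentence (hypothetical, not from the dataset).
sentence = "The patient consulted a cardiologist and underwent an electrocardiogram."

# Tokenize with the maximum sequence length used during fine-tuning.
inputs = tokenizer(sentence, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to class probabilities; they sum to 1 per sentence.
probs = torch.softmax(logits, dim=-1)
print(probs)
```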
More Information
Refer to the original paper, Predicting Hospitalization with LLMs from Health Insurance Data.
Refer to another article related to this research, Predicting Hospitalization from Health Insurance Data.
Questions?
Email:
- Everton F. Baro: [email protected], [email protected]
- Luiz S. Oliveira: [email protected]
- Alceu de Souza Britto Junior: [email protected]