Health History OpenLLaMA7Bv2-en-ft

HealthHistoryOpenLLaMA7Bv2-en-ft was fine-tuned from the pre-trained model open_llama_7b_v2 using patient data from health insurers organized as historical sentences. The initial objective of the training was to predict hospitalizations; however, given the possibility of applications to other tasks, we make these models available to the scientific community. This model was trained on English data translated from Portuguese health insurance data. Other training approaches can be found at:

Other pre-trained models

Other models trained to predict hospitalizations (fine-tuned)

Fine-tune Data

The model was fine-tuned on 167,431 historical sentences from health insurance patients, generated using the approach described in the paper Predicting Hospitalization from Health Insurance Data.

Model Fine-tune

Fine-tune Procedures

The model was fine-tuned on an NVIDIA GeForce RTX A5000 24 GB GPU in the laboratories of the IT department at UFPR (Federal University of Paraná). Fine-tuning was done using LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning (PEFT) technique that minimizes computational and memory overhead, making deployment in resource-constrained scenarios more practical while preserving the knowledge of the pre-trained model.
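As a minimal sketch of this setup, the snippet below applies LoRA adapters with the Hugging Face peft library. The LoRA values follow the hyperparameters section below; the target_modules choice and the exact training script are assumptions, not the authors' published code.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base pre-trained model the fine-tune started from
base_model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b_v2")

lora_config = LoraConfig(
    r=8,                                  # lora_r
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumed; adjust to the modules actually adapted
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the low-rank adapter weights are trainable
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()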

Fine-tune Hyperparameters

We used a batch size of 8, a maximum sequence length of 1024 tokens, 8 gradient accumulation steps, 1 epoch, and a learning rate of 5 × 10⁻⁵ to fine-tune this model.

For the LoRA hyperparameters, we used lora_r = 8, lora_alpha = 32, and lora_dropout = 0.1.
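Expressed with the Hugging Face Trainer API, these hyperparameters would look roughly as follows (a sketch under the assumption that the Trainer was used; the output path is hypothetical):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="health-history-openllama-7b-v2-en-ft",  # hypothetical output path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-5,
)
# The maximum sequence length of 1024 tokens is applied at tokenization time,
# e.g. tokenizer(text, truncation=True, max_length=1024)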

Fine-tune time

The training time was 79 hours 26 minutes per epoch.

Time to predict

Time to predict the first 500 sentences of the dataset data_test_seed_en_12.csv: 281.09 seconds

Time to predict the first 500 sentences of data_test_seed_en_12.csv, including data tokenization: 400.65 seconds

Predictions were made with the maximum sentence length allowed by the model and with 4-bit NF4 quantization.
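A sketch of loading the model with 4-bit NF4 quantization for inference, using bitsandbytes through transformers (the compute dtype and device_map are assumptions; adjust to your hardware):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization configuration used at prediction time
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("efbaro/HealthHistoryOpenLLaMA7Bv2-en-ft")
model = AutoModelForCausalLM.from_pretrained(
    "efbaro/HealthHistoryOpenLLaMA7Bv2-en-ft",
    quantization_config=bnb_config,
    device_map="auto",
)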

How to use the model

Load the model via the transformers library:

from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and base model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("efbaro/HealthHistoryOpenLLaMA7Bv2-en-ft")
model = AutoModel.from_pretrained("efbaro/HealthHistoryOpenLLaMA7Bv2-en-ft")
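Note that AutoModel returns only hidden states, without the language-modeling head. For text generation, a reasonable usage sketch (the prompt below is an illustrative placeholder, not real patient data) would load the model with a causal-LM head instead:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("efbaro/HealthHistoryOpenLLaMA7Bv2-en-ft")

prompt = "Patient history: ..."  # illustrative placeholder, not from the dataset
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))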

More Information

Refer to the original paper, Predicting Hospitalization with LLMs from Health Insurance Data.

Refer also to another article related to this research, Predicting Hospitalization from Health Insurance Data.

Questions?

Email:
