Update README.md

README.md
@@ -13,9 +13,10 @@ This is an unofficial implementation of "[AlpaGasus: Training a better Alpaca with Fewer Data](https://arxiv.org/abs/2307.08701)"

- **License**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

### Training dataset

"StudentLLM/Alpagasus-2-13b-QLoRA-merged" was trained on [gpt4life](https://github.com/gpt4life/alpagasus)'s gpt-3.5-turbo-filtered dataset, `alpaca_t45.json`.

Configuration of the dataset is as follows:

```

@@ -58,9 +59,34 @@ Our model was finetuned using QLoRA on a single A100 80GB GPU. Training details are

| TruthfulQA | 38.53 |

### LLM Evaluation

We followed the evaluation method introduced in the AlpaGasus paper, consulting the code by [gpt4life](https://github.com/gpt4life/alpagasus) along the way. We used OpenAI's gpt-3.5-turbo as the evaluator model, and Alpaca2-LoRA-13B (no longer available) as the comparison model. For more detailed information, please refer to our GitHub [repo](https://github.com/gauss5930/AlpaGasus2-QLoRA).
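
For reference, the pairwise judging step can be sketched as follows. The prompt wording and scoring scale are illustrative assumptions, not the exact template from the paper or the gpt4life code:

```python
# Illustrative sketch (assumed prompt): ask gpt-3.5-turbo to compare two
# models' answers to the same instruction, LLM-as-judge style.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(instruction: str, answer_a: str, answer_b: str) -> str:
    prompt = (
        f"[Instruction]\n{instruction}\n\n"
        f"[Assistant A]\n{answer_a}\n\n"
        f"[Assistant B]\n{answer_b}\n\n"
        "Rate each assistant's response on a scale of 1 to 10, "
        "then state which response is better."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```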

The evaluation result of AlpaGasus2-QLoRA is as follows:



### How to use

To use "StudentLLM/Alpagasus-2-13b-QLoRA-merged", follow the code below; usage of the 7B model is identical.

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the adapter config and the Llama-2-13B base model (a gated repo,
# so a Hugging Face access token is required), then attach the QLoRA adapter.
config = PeftConfig.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", use_auth_token="your_HuggingFace_token"
).to(device)
model = PeftModel.from_pretrained(model, "StudentLLM/Alpagasus-2-13B-QLoRA")

tokenizer = AutoTokenizer.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA")
tokenizer.pad_token = tokenizer.eos_token

input_data = "Please tell me 3 ways to relieve stress."  # Any question works here.

model_inputs = tokenizer(input_data, return_tensors='pt').to(device)
model_output = model.generate(**model_inputs, max_length=256)
model_output = tokenizer.decode(model_output[0], skip_special_tokens=True)
print(model_output)
```
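
Alternatively, since merged weights are published as "StudentLLM/Alpagasus-2-13b-QLoRA-merged", it should be possible to load them directly without `peft`; a sketch under that assumption:

```python
# Assumed alternative: load the merged checkpoint directly, no adapter step.
# This presumes the "-merged" repo hosts the full merged model weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("StudentLLM/Alpagasus-2-13b-QLoRA-merged")
tokenizer = AutoTokenizer.from_pretrained("StudentLLM/Alpagasus-2-13b-QLoRA-merged")
```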

### Citations
```bibtex