Update README.md
README.md
---
license: apache-2.0
datasets:
- datajuicer/alpaca-cot-zh-refined-by-data-juicer
---

This is a reference LLM from [Data-Juicer](https://github.com/alibaba/data-juicer).

The model architecture is LLaMA2-7B, and we built it upon a pre-trained Chinese checkpoint from [FlagAlpha](https://huggingface.co/FlagAlpha/Atom-7B).
The model is fine-tuned on 52k Chinese chat samples from Data-Juicer's refined [alpaca-CoT data](https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes/alpaca_cot/README.md#refined-alpaca-cot-dataset-meta-info).
In GPT-4-based evaluation, it outperforms a LLaMA2-7B model fine-tuned on 543k Belle samples.
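
Below is a minimal inference sketch using the standard Hugging Face `transformers` API. The repo id is a placeholder (substitute this model's actual id), and the prompt and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's actual Hugging Face repo id.
model_id = "<this-model-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 7B model within a single GPU
    device_map="auto",          # requires the `accelerate` package
)

# Example Chinese chat prompt (the model is fine-tuned on Chinese chat data).
prompt = "请介绍一下大语言模型。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```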

For more details, please refer to our [paper](https://arxiv.org/abs/2309.02033).