Update README.md
---
tags:
- merge
- mergekit
base_model:
- indischepartij/OpenMia-Indo-Mistral-7b-v2
- Obrolin/Kesehatan-7B-v0.1
- FelixChao/WestSeverus-7B-DPO-v2
---

# MiaLatte-Indo-Mistral-7b

MiaLatte-Indo-Mistral-7b is a merge of the following models using MergeKit:
* [indischepartij/OpenMia-Indo-Mistral-7b-v2](https://huggingface.co/indischepartij/OpenMia-Indo-Mistral-7b-v2)
* [Obrolin/Kesehatan-7B-v0.1](https://huggingface.co/Obrolin/Kesehatan-7B-v0.1)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
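The actual merge configuration is not shown in this excerpt. As a hypothetical sketch only, a three-model MergeKit merge of this kind is typically described by a YAML config along these lines (the model names come from the card; the merge method, base-model choice, and all parameter values below are assumptions):

```yaml
# Hypothetical mergekit config -- method, base_model, and weights are assumptions.
models:
  - model: Obrolin/Kesehatan-7B-v0.1
    parameters:
      density: 0.5
      weight: 0.3
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: indischepartij/OpenMia-Indo-Mistral-7b-v2
dtype: bfloat16
```

With mergekit installed, such a config would be applied with `mergekit-yaml config.yml ./output-model`.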
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "indischepartij/MiaLatte-Indo-Mistral-7b"
# Indonesian: "What kind of skincare is suitable for acne-prone skin?"
messages = [{"role": "user", "content": "Apa jenis skincare yang cocok untuk kulit berjerawat??"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```
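With `tokenize=False`, `apply_chat_template` renders the message list into the model's prompt format as a plain string, using the template bundled with the tokenizer. As a rough illustration only, a Mistral-style instruct template (an assumption here; the authoritative template is whatever ships with this model's tokenizer) can be sketched as:

```python
def mistral_style_prompt(messages):
    """Render a messages list into a Mistral-style [INST] prompt.

    Rough sketch for illustration; the real formatting is done by
    tokenizer.apply_chat_template using the model's own template.
    """
    parts = []
    for m in messages:
        if m["role"] == "user":
            # User turns are wrapped in instruction markers.
            parts.append(f"[INST] {m['content']} [/INST]")
        elif m["role"] == "assistant":
            # Assistant turns are emitted verbatim, closed with EOS.
            parts.append(f"{m['content']}</s>")
    return "".join(parts)

prompt = mistral_style_prompt(
    [{"role": "user", "content": "Apa jenis skincare yang cocok untuk kulit berjerawat??"}]
)
# prompt == "[INST] Apa jenis skincare yang cocok untuk kulit berjerawat?? [/INST]"
```

This is why the generated `prompt` string can be fed directly to a text-generation pipeline: the instruction markers tell the instruct-tuned model where the user turn ends and its reply should begin.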