auto-patch README.md
README.md CHANGED

@@ -1,7 +1,7 @@
 ---
 base_model: Labagaite/mistral-Summarizer-7b-instruct-v0.2
 language:
--
+- fr
 library_name: transformers
 license: apache-2.0
 quantized_by: mradermacher
@@ -50,7 +50,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/mistral-Summarizer-7b-instruct-v0.2-GGUF/resolve/main/mistral-Summarizer-7b-instruct-v0.2.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 