This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.

---

Context: 1,000,000 tokens.

Required: Llama 3 Instruct template.
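For reference, a minimal sketch of the Llama 3 Instruct layout for a single turn (header and stop tokens per Meta's published Llama 3 chat format; the system/user text here is only an example — most llama.cpp frontends apply this template automatically from the GGUF metadata):

```shell
# Single-turn prompt in Llama 3 Instruct format.
PROMPT='<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

What is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

'
printf '%s' "$PROMPT"
```

When llama-cli runs in conversation mode it applies the chat template itself, so you pass plain text and the wrapping above is added for you.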

The DeepHermes 8B Preview (reasoning) model [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ] was extended to a 1-million-token context using Nvidia's UltraLong-1M 8B Instruct model.

The goal of this model was to stabilize long-form generation and to address long-context "needle in a haystack" issues.

According to Nvidia, there is both a bump in general performance and perfect "recall" over the entire 1-million-token context:

[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]

Additional changes:

The model appears to be less censored than the original.
Output generation is improved.
Creative output generation is vastly improved.

NOTE: Higher temperatures will result in deeper, richer "thoughts"... and frankly more interesting ones too.
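With llama.cpp, temperature is a sampling flag set at launch; a sketch (the GGUF filename is a placeholder for whichever quant of this model you downloaded):

```shell
# Raise the sampling temperature above the default (~0.8) for richer reasoning traces.
llama-cli -m your-quant-of-this-model.gguf --temp 1.2 -p "Explain quantum entanglement"
```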

The "thinking/reasoning" technology (for the model in this repo) comes from the original Llama 3.1 "DeepHermes" model by NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
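A sketch of the standard GGUF-my-repo invocation; the repo id and GGUF filename below are placeholders — substitute this repo's actual id and the quantization file you want:

```shell
brew install llama.cpp

# CLI: download the GGUF from the Hub and run a one-off prompt.
llama-cli --hf-repo <this-repo-id> --hf-file <model-file>.gguf -p "The meaning to life and the universe is"

# Server: expose an OpenAI-compatible endpoint (defaults to localhost:8080).
llama-server --hf-repo <this-repo-id> --hf-file <model-file>.gguf -c 2048
```

`-c` sets the context size; this model is trained for up to 1M tokens, so raise it as far as your memory allows to exploit the long-context capability.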