Adding Evaluation Results #9
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -1,5 +1,7 @@
 ---
-base_model: upstage/SOLAR-10.7B-v1.0
+language:
+- en
+license: apache-2.0
 tags:
 - SOLAR
 - instruct
@@ -8,14 +10,12 @@ tags:
 - gpt4
 - synthetic data
 - distillation
+datasets:
+- teknium/OpenHermes-2.5
+base_model: upstage/SOLAR-10.7B-v1.0
 model-index:
 - name: Nous-Hermes-2-SOLAR-10.7B
   results: []
-license: apache-2.0
-language:
-- en
-datasets:
-- teknium/OpenHermes-2.5
 ---
 
 # Nous Hermes 2 - Solar 10.7B
@@ -206,3 +206,17 @@ In LM-Studio, simply select the ChatML Prefix on the settings side pane:
 GGUF: https://huggingface.co/TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NousResearch__Nous-Hermes-2-SOLAR-10.7B)
+
+|             Metric              |Value|
+|---------------------------------|----:|
+|Avg.                             |71.00|
+|AI2 Reasoning Challenge (25-Shot)|66.72|
+|HellaSwag (10-Shot)              |84.89|
+|MMLU (5-Shot)                    |66.30|
+|TruthfulQA (0-shot)              |55.82|
+|Winogrande (5-shot)              |82.79|
+|GSM8k (5-shot)                   |69.45|
+
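The Avg. row added by this PR is the arithmetic mean of the six benchmark scores; a minimal Python sketch checking that, with the values copied from the table above:

```python
# Sanity check: the Open LLM Leaderboard "Avg." is the mean of the six benchmark scores.
# Values are copied from the table in the diff above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 66.72,
    "HellaSwag (10-Shot)": 84.89,
    "MMLU (5-Shot)": 66.30,
    "TruthfulQA (0-shot)": 55.82,
    "Winogrande (5-shot)": 82.79,
    "GSM8k (5-shot)": 69.45,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.3f}")  # ~70.995, which the leaderboard rounds to the reported 71.00
```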