SnowStorm-v1.15-4x8B-A
Experimental RP-oriented MoE (mixture of experts). The goal was a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.

The merge configuration (mergekit-moe format):
```yaml
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: Nitral-AI_Poppy_Porpoise-1.0-L3-8B
  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
  - source_model: openlynn_Llama-3-Soliloquy-8B-v2
  - source_model: Sao10K_L3-8B-Stheno-v3.1
```
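A config like this is consumed by mergekit's `mergekit-moe` script, which stitches the four 8B experts into a single 4x8B MoE with randomly initialized router gates, routing each token to 2 of the 4 experts. Below is a minimal sketch of loading and sampling from the merged model with `transformers`; the repo id is an assumption for illustration, substitute the actual one:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id -- replace with the actual published model.
model_id = "xxx777xxxASD/L3-SnowStorm-v1.15-4x8B-A"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

# Llama-3-based experts, so the Llama-3 chat template applies.
messages = [{"role": "user", "content": "Write a short scene opener."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```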
Open LLM Leaderboard evaluation results (detailed results can be found here):
| Metric | Value |
|---|---|
| Avg. | 67.68 |
| AI2 Reasoning Challenge (25-Shot) | 62.20 |
| HellaSwag (10-Shot) | 81.09 |
| MMLU (5-Shot) | 67.89 |
| TruthfulQA (0-shot) | 52.11 |
| Winogrande (5-shot) | 76.32 |
| GSM8k (5-shot) | 66.49 |
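As a quick sanity check, the reported average is the unweighted arithmetic mean of the six benchmark scores:

```python
# Verify the "Avg." row from the table above.
scores = {
    "ARC (25-shot)": 62.20,
    "HellaSwag (10-shot)": 81.09,
    "MMLU (5-shot)": 67.89,
    "TruthfulQA (0-shot)": 52.11,
    "Winogrande (5-shot)": 76.32,
    "GSM8k (5-shot)": 66.49,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 67.68
```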