Update README.md #1
by NeuralNovel - opened

README.md CHANGED
@@ -8,9 +8,10 @@ tags:
 - merge
 ---
 # CognitiveFusion2-4x7B-BF16
-

-
+
+
+# Back and better than ever.

 [GGUF FILES](https://huggingface.co/Kquant03/CognitiveFusion2-4x7B-GGUF)

@@ -18,6 +19,8 @@ tags:

 This is an update to the original [Cognitive Fusion](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-bf16-MoE). We intend to perform a fine-tune on it in order to increase its performance.

+Made cooperatively with [NeuralNovel](https://huggingface.co/NeuralNovel) 🤝
+
 - [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) - base
 - [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) - expert #1
 - [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) - expert #2
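
For readers who want to see how a 4x7B mixture-of-experts merge like this is typically assembled, here is a minimal mergekit-moe config sketch. This is an illustration under assumptions, not the config used for this model: the base model and the two experts named above come from the README, while the gating prompts and the remaining two expert entries are placeholders.

```yaml
# Hypothetical sketch of a mergekit-moe config for a 4x7B merge.
# NOT the actual config for CognitiveFusion2-4x7B-BF16; the prompts and
# the last two experts are placeholders.
base_model: automerger/YamshadowExperiment28-7B   # listed as "base" in the README
gate_mode: hidden       # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16         # consistent with the BF16 repo name
experts:
  - source_model: automerger/YamshadowExperiment28-7B   # expert #1 per the README
    positive_prompts:
      - "placeholder prompt describing what expert #1 should handle"
  - source_model: liminerity/M7-7b                      # expert #2 per the README
    positive_prompts:
      - "placeholder prompt describing what expert #2 should handle"
  # experts #3 and #4 are not listed in this diff, so they are omitted here
```

With mergekit installed, a config like this is normally run with `mergekit-moe config.yaml ./merged-model`, and the result is quantized separately for a GGUF release like the one linked above; this describes the general tooling, not necessarily how this particular merge was produced.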