Minitron 4B Derivative
These models are tuned on top of a healed Minitron Width Base 4B model and should perform near the level of Llama 2 7B for RP.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the task arithmetic merge method, with FourOhFour/Zenith_4B as the base.
The following models were included in the merge:

- FourOhFour/Deedlit_4B
- FourOhFour/NeuroCom_4B
- FourOhFour/NeuroCom_v2_4B
- FourOhFour/QuantuMinx_4B
- FourOhFour/Luxe_4B
- FourOhFour/Maelstrom_4B
- FourOhFour/Poe_4B
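For intuition, task arithmetic forms a "task vector" for each fine-tune (its parameter delta from the shared base), scales each by its weight, sums them, and adds the result back onto the base; with `normalize: true`, mergekit rescales the summed deltas by the total weight. A minimal per-tensor sketch of the idea (not mergekit's actual implementation):

```python
import torch

def task_arithmetic(base: torch.Tensor,
                    tuned: list[torch.Tensor],
                    weights: list[float],
                    normalize: bool = True) -> torch.Tensor:
    """Merge one parameter tensor via task arithmetic.

    Sketch only: mergekit applies this per tensor across whole
    checkpoints, with extra care around dtypes and tokenizers.
    """
    # Each fine-tune contributes its weighted delta ("task vector") from the base.
    deltas = [w * (t - base) for w, t in zip(weights, tuned)]
    total = torch.stack(deltas).sum(dim=0)
    if normalize:
        # With normalize: true the combined delta becomes a weighted
        # average of the task vectors rather than a raw sum.
        total = total / sum(weights)
    return base + total

# Toy demonstration with made-up tensors.
base = torch.zeros(4)
tuned = [torch.ones(4), 2 * torch.ones(4)]
merged = task_arithmetic(base, tuned, [0.3, 0.1])
```

Note that under this formulation FourOhFour/Zenith_4B, which appears both as the base and as a merge target, contributes a zero delta; its 0.3 weight then only enters through the normalization denominator.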
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: FourOhFour/Zenith_4B
parameters:
  normalize: true
models:
  - model: FourOhFour/Deedlit_4B
    parameters:
      weight: 0.3
  - model: FourOhFour/NeuroCom_4B
    parameters:
      weight: 0.1
  - model: FourOhFour/NeuroCom_v2_4B
    parameters:
      weight: 0.1
  - model: FourOhFour/Zenith_4B
    parameters:
      weight: 0.3
  - model: FourOhFour/QuantuMinx_4B
    parameters:
      weight: 0.1
  - model: FourOhFour/Luxe_4B
    parameters:
      weight: 0.2
  - model: FourOhFour/Maelstrom_4B
    parameters:
      weight: 0.1
  - model: FourOhFour/Poe_4B
    parameters:
      weight: 0.1
dtype: bfloat16
```
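A configuration like this is typically applied with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged`), and the resulting checkpoint loads like any other Transformers model. A hypothetical loading sketch; the `./merged` path is a placeholder, not part of this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Directory mergekit wrote the merged checkpoint to; "./merged" is a placeholder.
model_path = "./merged"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "Hello,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```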