# 🩵 LunaMaid-12B
This is a multi-stage merge of pre-trained language models created using mergekit.
## 🧬 Merge Overview
LunaMaid-12B was produced through a two-stage multi-model merge using MergeKit.
Each stage fuses models with complementary linguistic and stylistic traits to create a cohesive, emotionally nuanced personality.
### 🩵 Stage 1 – Slerp Merge (Intermediate Model First)
- Base Model: Vortex5/Vermilion-Sage-12B
- Merged With: yamatazen/NeonMaid-12B-v2
- Method: Spherical Linear Interpolation (Slerp)
#### Stage 1 Configuration
```yaml
name: First
base_model: Vortex5/Vermilion-Sage-12B
models:
  - model: yamatazen/NeonMaid-12B-v2
merge_method: slerp
dtype: bfloat16
parameters:
  normalize: true
  t: [0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.6, 0.5, 0.6, 0.6]
```
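Slerp interpolates along the great-circle arc between two weight tensors instead of the straight chord, which preserves parameter magnitudes better than a plain weighted average; the `t` list above assigns a different interpolation weight to each layer group. A minimal NumPy sketch of the operation on flattened tensors (illustrative only, not mergekit's actual per-tensor implementation):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    omega = np.arccos(dot)           # angle between the two weight directions
    if omega < eps:                  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

At `t = 0` the result is the first tensor, at `t = 1` the second; for unit-norm inputs the interpolant stays on the unit sphere, which is the property that distinguishes slerp from a linear blend.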
### 🌙 Stage 2 – Karcher Mean Merge (Final Model)
- Base Model: Intermediate output from Stage 1 (`./intermediates/First`)
- Merged With: Vortex5/Moonlit-Shadow-12B
- Method: Karcher Mean (Riemannian Barycenter)
#### Stage 2 Configuration
```yaml
dtype: bfloat16
merge_method: karcher
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: ./intermediates/First
          - layer_range: [0, 40]
            model: Vortex5/Moonlit-Shadow-12B
parameters:
  max_iter: 9999
  tol: 1e-9
```
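The Karcher mean (Riemannian barycenter) generalizes averaging to curved spaces: it is the point minimizing the sum of squared geodesic distances to the inputs, found iteratively, which is why the config exposes `max_iter` and `tol`. A toy NumPy sketch of the fixed-point iteration on the unit hypersphere, illustrating the idea rather than mergekit's actual implementation:

```python
import numpy as np

def karcher_mean(vectors, weights=None, max_iter=9999, tol=1e-9):
    """Riemannian barycenter of unit vectors on the hypersphere,
    found by repeatedly averaging in the tangent space at the
    current estimate (log map), then stepping along the sphere
    in that direction (exp map)."""
    X = np.stack([v / np.linalg.norm(v) for v in vectors])
    w = np.full(len(X), 1.0 / len(X)) if weights is None else np.asarray(weights)
    mu = X[0].copy()                       # initialize at the first point
    for _ in range(max_iter):
        dots = np.clip(X @ mu, -1.0, 1.0)
        thetas = np.arccos(dots)           # geodesic distances to mu
        sin_t = np.sin(thetas)
        safe_sin = np.where(sin_t > 1e-12, sin_t, 1.0)
        coef = np.where(sin_t > 1e-12, thetas / safe_sin, 0.0)
        # log map: project each point into the tangent space at mu
        tangents = coef[:, None] * (X - dots[:, None] * mu)
        step = (w[:, None] * tangents).sum(axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:                     # tangent-space mean has converged
            break
        # exp map: move along the mean tangent direction, staying on the sphere
        mu = np.cos(norm) * mu + np.sin(norm) * (step / norm)
        mu /= np.linalg.norm(mu)
    return mu
```

For two inputs this reduces to the geodesic midpoint, so the Stage 2 merge can be read as a norm-preserving average of the Stage 1 intermediate and Moonlit-Shadow-12B.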
## Models Merged
The following models were included in the merge:
- Vortex5/Moonlit-Shadow-12B
- Vortex5/Vermilion-Sage-12B
- yamatazen/NeonMaid-12B-v2
- ./intermediates/First
