---
license: mit
datasets:
  - zerofata/Instruct-Anime
  - zerofata/Roleplay-Anime-Characters
  - zerofata/Instruct-Anime-CreativeWriting
  - zerofata/Summaries-Anime-FandomPages
base_model:
  - zai-org/GLM-4.5-Air
---

# ICEBLINK

## Version 2


## Overview

Another attempt at tuning GLM 4.5 Air, this time with a different training framework, updated data, and better hyperparameters.

This is a creative writing and RP model, and it's fairly verbose. The intent is to keep the behavior of the original model while improving its writing, dialogue, and creativity.

Compared to the original Iceblink, the effect of the finetune is more pronounced here, hopefully with minimal impact on intelligence.

## SillyTavern Settings

### Recommended Roleplay Format

> Actions: In plaintext
> Dialogue: "In quotes"
> Thoughts: *In asterisks*
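
For example, a reply in this format might look like the following (an illustrative snippet, not actual model output):

> She sets the cup down without looking up. "You're late." *Three hours. He was gone for three hours.*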

### Recommended Samplers

> Temp: 0.8 - 0.9
> MinP: 0.05
> TopP: 0.95 - 1.00
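
If you run the model outside SillyTavern, these samplers map directly onto an OpenAI-compatible API. Here's a minimal sketch in Python, assuming a local llama.cpp or vLLM server; the URL and model id are placeholders, not part of this card:

```python
import requests

# Hypothetical local OpenAI-compatible endpoint (llama.cpp server, vLLM, etc.).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # placeholder URL
    json={
        "model": "iceblink-v2",  # placeholder model id
        "messages": [{"role": "user", "content": "Write an opening scene."}],
        "temperature": 0.85,  # recommended: 0.8 - 0.9
        "top_p": 0.95,        # recommended: 0.95 - 1.00
        "min_p": 0.05,        # non-standard extension; supported by llama.cpp and vLLM
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```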

### Instruct

GLM4.5 (no thinking): SillyTavern Preset

## Quantizations

## Creation Process

### SFT

SFT on approximately 13 million tokens of SFW/NSFW RP, stories, creative instruct, and chat data. Some of the SFW datasets are public and can be found in the datasets list above.

I've switched from Axolotl to MS-Swift with Megatron for training MoE models. Escaping the naive MoE implementation in TRL gives a roughly 5-10x training speedup; this run took only 40 minutes of training time, excluding environment setup.

A low learning rate appears to be king for GLM Air; going any higher, I've found it extremely easy to start overcooking the model.
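
To make the "low LR" guidance concrete, here is a hypothetical SFT config sketch using plain Hugging Face `TrainingArguments`. The actual run used MS-Swift with Megatron and its exact hyperparameters are not published, so treat every value below as an assumption:

```python
from transformers import TrainingArguments

# Illustrative only: the real run used MS-Swift + Megatron, and its exact
# hyperparameters are not published. These values are hypothetical, chosen
# to reflect the "low learning rate" guidance above.
args = TrainingArguments(
    output_dir="glm-air-sft",            # placeholder path
    learning_rate=1e-5,                  # hypothetical "low" LR; higher risks overcooking
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
)
```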

## Special Thanks

A shoutout to the people in the BeaverAI Discord who helped me test this model and its intermediate versions.

ddh0 (Madison), Ambius, Dysfunctional & my dude.