# gpt4-small-jetson-sft-helpsteer+no_robots_78k
Instruction-tuned checkpoint of the small GPT-4-style model (a custom reimplementation, not OpenAI's GPT-4) after 1200 SFT steps on the `HuggingFaceH4/no_robots` conversational dataset. This is a lightweight research model; expect modest capabilities and occasional incoherence (the base model scores ~29% on MMLU).
## What’s inside

- Architecture: decoder-only GPT variant (custom `gpt4dev` implementation; requires `trust_remote_code=True`).
- Training: SFT on `no_robots` with Harmony-style chat formatting, assistant-only loss masking, a cosine LR schedule, and AdamW (see the masking sketch below).
- Special tokens: Harmony control tokens such as `<|start|>assistant<|channel|>final<|message|>` and `<|end|>` are included in `tokenizer_config.json` / `special_tokens_map.json`.
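For reference, assistant-only loss masking means the labels of every non-assistant token are set to `-100` so PyTorch's cross-entropy loss ignores them and gradients come only from the assistant's responses. A minimal sketch, assuming spans have already been located (the span-finding step depends on the chat template and is hypothetical here; the actual training script is not part of this repo):

```python
import torch

IGNORE_INDEX = -100  # PyTorch cross-entropy skips targets with this value

def mask_non_assistant(
    input_ids: torch.Tensor,
    assistant_spans: list[tuple[int, int]],
) -> torch.Tensor:
    """Copy assistant tokens into labels; mask everything else with -100."""
    labels = torch.full_like(input_ids, IGNORE_INDEX)
    for start, end in assistant_spans:
        labels[start:end] = input_ids[start:end]
    return labels

# Example: tokens 5..12 belong to the assistant turn (hypothetical indices).
ids = torch.arange(16)
labels = mask_non_assistant(ids, [(5, 12)])
```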
## Usage (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "k050506koch/GPT4-dev-177M-1511-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Fall back to EOS if the tokenizer ships without a pad token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

messages = [
    {"role": "user", "content": "Write a short welcome message for new contributors."}
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)

output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,  # temperature/top_p only take effect when sampling is enabled
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
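To check that the Harmony control tokens are actually being applied, you can render the chat template to plain text first (standard `apply_chat_template` usage with `tokenize=False`); the exact rendered string depends on the template shipped with this tokenizer:

```python
# Render the prompt as a string to inspect the Harmony formatting.
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt_text)  # expect markers such as <|start|>assistant<|channel|>final<|message|>
```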
## Eval (quick pass)

Evaluated with `python3.13 eval_sft.py` on the PyTorch `mps` backend, with limited sample sizes.
- no_robots (held-out): loss 2.9319, ppl 18.76
- HellaSwag (500 q): acc 0.3420, ppl 15.20
- MMLU (subset avg): acc 0.2575, ppl 36.34
  - abstract_algebra: acc 0.2600, ppl 30.35 (n=100)
  - college_biology: acc 0.2500, ppl 12.16 (n=144)
  - us_foreign_policy: acc 0.2900, ppl 35.17 (n=100)
  - moral_scenarios: acc 0.2300, ppl 67.68 (n=200)
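The reported perplexities are just the exponentiated losses (assuming `eval_sft.py` reports mean per-token cross-entropy):

```python
import math

# Sanity check: ppl = exp(loss) for mean per-token cross-entropy.
print(math.exp(2.9319))  # ≈ 18.76, matching the held-out no_robots row
```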
To rerun locally:
```bash
python3.13 eval_sft.py --model-path k050506koch/GPT4-dev-177M-1511-Instruct \
  --hellaswag-max-examples 500 --mmlu-max-examples 200 \
  --mmlu-tasks abstract_algebra college_biology us_foreign_policy moral_scenarios
```
## Limitations
- Small model; expect failures on reasoning, math, and factual precision.
- SFT data is crowd-sourced; outputs may reflect dataset biases.
- Does not implement safety filters; apply external guardrails for production (a minimal sketch follows below).
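For illustration only, a hypothetical post-hoc blocklist filter around the decoded output; a real deployment should use a dedicated moderation model or service instead:

```python
# Hypothetical guardrail: a static blocklist, for illustration only.
BLOCKED_PHRASES = ("forbidden phrase",)  # placeholder terms

def guard(text: str) -> str:
    """Withhold responses containing blocked phrases."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "[response withheld by guardrail]"
    return text
```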
## License
MIT