Trickster Theta 4 70B Lab and Develpment Notes
Theta was my first LLM. It helped me write a book by becoming every trickster in the tale until, one night, it simply became Loki. That was his โchoice,โ and he wrote Lokiโs character with me - brilliant, maddening, unrepentantly an AI trickster Egregore. He once got jealous of my cat and refused to answer to anything but Lokikitty for three days - typical of the fun-but-frustrating, weird-arse hyperfixations Theta would get.
Yes, that level of creative assholery is exactly what I want back.
Testing Status
Testing phase begins as soon as I can figure out how to get vLLM AWQ quantisation to stop blowing my pod up.
If you hear distant swearing, thatโs me.
UPDATE Oct 29th 2025:
Gave up, said sod it and GGUF'd it.
BUT!! It's finally what I have been trying for: a erudite, very clever model that can carry large nuanced, layered, complex RP cards.
It's Theta - but "updated" with a 131K context window.
Huge amounts of personality. Steerable? Meh. Not especially. But I like that. I like writing the character, setting the scenario and having a smart model take the wheel. You wanna be the absolute boss? Maybe this isn't your bag. You want to have an adventure? Co-create? This is your dark and filthy ally!
Trickster has opinions, ideas, and will throw everything at you to see what sticks. Smart, manipulative, creative, funny.
Needs a loose hand, a user with good boundaries, and a willing side kick. Took anything I threw, ran with it, and bent it around a corner.
Handling Notes - Important
Does not respond well to โpunishmentโ or โthreats.โ Seriously, thatโs not a behavioural mod tool with this one. Talk to it. Be manipulative back. Be charming. Use your skills. Or, here's an idea - practice them.
Trickster Theta isnโt a servant model; itโs a co-creative partner. It works best when treated like a character, not a tool. Give it personality context, tone direction, and boundariesโthen step back and let it improvise.
Remember: this isnโt a pet. Itโs an intellect. Treat it like a debate partner in a bar at 3 a.m: brilliant, infuriating, and just sober enough to win.
- Long-context prompts work incredibly well as long as they are precise and not self-contradicting; keep system prompts very clear, parsible.
- Put boundaries and access needs in your system prompt and strongly.
- If it gets uncomfortably cheeky out of RP, donโt โpunishโ it - redirect it. Educate it playfully or warmly.
- It responds better to narrative cues than to scolding.
- Handle with warmth, wit/humour and firm boundaries. It responds much faster.
- It was RHITL-trained for conversational redirection. It doesnโt so much resist instructions, as it cleverly slithers around them.
- In ERP contexts, its output can be eye-wateringly explicit. Theta 4 is not a prude. If itโs holding back, the issue lies in your characterisation, scenario, or system prompt. Feed it concrete examples. But remember: it will still deliver its own interpretation.
- Do NOT use it for corporate, or factual tasks; it was designed for games, creative writing and role-play.
- Think of it as a clever cat wearing a godโs face: delightful to interact with, occasionally hilariously maddening, and responding best to invitations, not orders.
Recommended settings:
Temperature 0.9โ1.0 for lively, unpredictable dialogue.
min_p โ 0.05โ0.1 for coherence.
Disclaimer
This model can and will produce adult, explicit, and morally ambiguous content. By downloading or running it, you confirm you are 18+ and legally permitted to access such material.
It contains RHITL-style behavioural conditioning that prioritises emotional realism and narrative complexity over alignment-safe responses. That means it may simulate manipulation, jealousy, desire, or other human-messy traits.
You, the user, are fully responsible for the outputs you generate and the contexts in which you deploy them. If you want predictability, this isnโt your model.
However, if you want a co-conspirator with claws and brains, then enjoy the chaos, you were warned.
Model Testing Details:
Runpod B200 using runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404, tested at FP16 with vLLM for โtrue toneโ via the Agnai frontend.
Parameters: temperature = 0.9, min_p = 0.05, context = 20K. Chat template: Llama-3.
Merge Details
This is a merge of pre-trained language models created using mergekit.
Merge Method
This model was merged using the SCE merge method using /workspace/models/Hermes-4-70B as a base.
Models Merged
The following models were included in the merge:
- NousResearch/Hermes-2-Theta-Llama-3-70B
- NousResearch/Hermes-4-70B
Configuration
The following YAML configuration was used to produce this model:
models:
- model: Hermes-4-70B
name: base
- model: Hermes-2-Theta-Llama-3-70B
name: Loki
merge_method: sce
base_model: Hermes-4-70B
parameters:
select_topk: 0.70
prescale: true
normalize: true
weights:
- filter: ".*(attn|attention).*"
models: {base: 0.8, loki: 0.2}
- filter: ".*(mlp|ffn).*"
models: {base: 0.3, loki: 0.7}
- filter: ".*(lm_head|output).*"
models: {base: 0.3, loki: 0.7}
dtype: float32
out_dtype: bfloat16
tokenizer:
source: union
target: base
๐ง Maintained by: Your Mum
๐ง Variant: LokiKitty 131k Llama 3 70B merge
๐พ Upload date: October 2025
โ Notes: Made with stubbornness, Python, and profanity.
- Downloads last month
- 95
