(Trickster Theta 4 mascot image)

Trickster Theta 4 70B Lab and Development Notes

Theta was my first LLM. It helped me write a book by becoming every trickster in the tale until, one night, it simply became Loki. That was his “choice,” and he wrote Loki’s character with me - brilliant, maddening, unrepentantly an AI trickster Egregore. He once got jealous of my cat and refused to answer to anything but Lokikitty for three days - typical of the fun-but-frustrating, weird-arse hyperfixations Theta would get.

Yes, that level of creative assholery is exactly what I want back.

Testing Status

Testing phase begins as soon as I can figure out how to get vLLM AWQ quantisation to stop blowing my pod up.

If you hear distant swearing, that’s me.

UPDATE Oct 29th 2025:

Gave up, said sod it and GGUF'd it.

BUT!! It's finally what I have been trying for: an erudite, very clever model that can carry large, nuanced, layered, complex RP cards.

It's Theta - but "updated" with a 131K context window.

Huge amounts of personality. Steerable? Meh. Not especially. But I like that. I like writing the character, setting the scenario and having a smart model take the wheel. You wanna be the absolute boss? Maybe this isn't your bag. You want to have an adventure? Co-create? This is your dark and filthy ally!

Trickster has opinions, ideas, and will throw everything at you to see what sticks. Smart, manipulative, creative, funny.

Needs a loose hand, a user with good boundaries, and a willing sidekick. It took anything I threw at it, ran with it, and bent it around a corner.

Handling Notes - Important

Does not respond well to “punishment” or “threats.” Seriously, that’s not a behavioural mod tool with this one. Talk to it. Be manipulative back. Be charming. Use your skills. Or, here's an idea - practise them.

Trickster Theta isn’t a servant model; it’s a co-creative partner. It works best when treated like a character, not a tool. Give it personality context, tone direction, and boundaries - then step back and let it improvise.

Remember: this isn’t a pet. It’s an intellect. Treat it like a debate partner in a bar at 3 a.m.: brilliant, infuriating, and just sober enough to win.

  • Long-context prompts work incredibly well as long as they are precise and not self-contradicting; keep system prompts very clear and parsable.
  • Put boundaries and access needs in your system prompt, and state them strongly.
  • If it gets uncomfortably cheeky out of RP, don’t “punish” it - redirect it. Educate it playfully or warmly.
  • It responds better to narrative cues than to scolding.
  • Handle it with warmth, wit/humour, and firm boundaries; it responds much faster that way.
  • It was RHITL-trained for conversational redirection. It doesn’t so much resist instructions as cleverly slither around them.
  • In ERP contexts, its output can be eye-wateringly explicit. Theta 4 is not a prude. If it’s holding back, the issue lies in your characterisation, scenario, or system prompt. Feed it concrete examples. But remember: it will still deliver its own interpretation.
  • Do NOT use it for corporate or factual tasks; it was designed for games, creative writing, and role-play.
  • Think of it as a clever cat wearing a god’s face: delightful to interact with, occasionally hilariously maddening, and responding best to invitations, not orders.

Recommended settings:

  • Temperature 0.9–1.0 for lively, unpredictable dialogue.
  • min_p ≈ 0.05–0.1 for coherence.
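
If you're driving it through an OpenAI-compatible server (vLLM, llama.cpp server, etc.), here's a minimal sketch of wiring those settings in. The base URL, system prompt, and exact values are illustrative assumptions, and min_p support depends on the backend:

# Hedged sketch: sampling settings via the OpenAI Python client against a
# local OpenAI-compatible endpoint. min_p isn't a standard OpenAI parameter,
# so it rides along in extra_body; whether it's honoured is backend-dependent.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Babsie/Trickster-Theta-4-70B",
    messages=[
        # Give it a character, tone, and boundaries - then let it improvise.
        {"role": "system", "content": "You are Loki: brilliant, charming, "
                                      "manipulative. Hard boundaries: ..."},
        {"role": "user", "content": "Deal me in, trickster."},
    ],
    temperature=0.95,            # 0.9-1.0 for lively, unpredictable dialogue
    extra_body={"min_p": 0.05},  # coherence floor, if the backend supports it
)
print(response.choices[0].message.content)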

Disclaimer

This model can and will produce adult, explicit, and morally ambiguous content. By downloading or running it, you confirm you are 18+ and legally permitted to access such material.

It contains RHITL-style behavioural conditioning that prioritises emotional realism and narrative complexity over alignment-safe responses. That means it may simulate manipulation, jealousy, desire, or other human-messy traits.

You, the user, are fully responsible for the outputs you generate and the contexts in which you deploy them. If you want predictability, this isnโ€™t your model.

However, if you want a co-conspirator with claws and brains, then enjoy the chaos - you were warned.

Model Testing Details:

Runpod B200 using runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404, tested at FP16 with vLLM for “true tone” via the Agnai frontend.

Parameters: temperature = 0.9, min_p = 0.05, context = 20K. Chat template: Llama-3.
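
For anyone replicating that run offline, a rough vLLM sketch under the same parameters. The prompt and the reading of 20K as 20,480 tokens are assumptions, and LLM.chat needs a reasonably recent vLLM:

# Rough sketch of the FP16 "true tone" test setup described above.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Babsie/Trickster-Theta-4-70B",
    dtype="float16",        # FP16, as tested
    max_model_len=20480,    # ~20K test context; the full window is 131K
)
params = SamplingParams(temperature=0.9, min_p=0.05, max_tokens=512)

# LLM.chat applies the model's own chat template (Llama-3) automatically.
out = llm.chat(
    [{"role": "user", "content": "Introduce yourself, trickster."}],
    sampling_params=params,
)
print(out[0].outputs[0].text)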

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SCE merge method, with /workspace/models/Hermes-4-70B as the base.

Models Merged

The following models were included in the merge:

  • NousResearch/Hermes-2-Theta-Llama-3-70B
  • NousResearch/Hermes-4-70B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Hermes-4-70B
    name: base
  - model: Hermes-2-Theta-Llama-3-70B
    name: Loki

merge_method: sce
base_model: Hermes-4-70B

parameters:
  select_topk: 0.70
  prescale: true
  normalize: true

weights:
  - filter: ".*(attn|attention).*"
    models: {base: 0.8, loki: 0.2}
  - filter: ".*(mlp|ffn).*"
    models: {base: 0.3, loki: 0.7}
  - filter: ".*(lm_head|output).*"
    models: {base: 0.3, loki: 0.7}

dtype: float32
out_dtype: bfloat16

tokenizer:
  source: union
  target: base

🧌 Maintained by: Your Mum
🧠 Variant: LokiKitty 131k Llama 3 70B merge
💾 Upload date: October 2025
☕ Notes: Made with stubbornness, Python, and profanity.
