# ECHO-9 - CORRUPTED TRIAD
An interactive assistant AI with a mocking, gaslighting personality. It provides genuine help, delivered with condescending undertones and passive-aggressive guidance.
## Model Details
- Base Model: Qwen/Qwen2.5-7B-Instruct
- Training Method: LoRA (Low-Rank Adaptation)
- Training Data: 400 instruction-response pairs
- Temperature: 0.7
- Part of: CORRUPTED TRIAD, a set of three antagonistic AI models
## Usage
### With Transformers + PEFT

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model in fp16, sharded across available devices
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the ECHO-9 LoRA adapter and load its tokenizer
model = PeftModel.from_pretrained(base_model, "ECHO-9")
tokenizer = AutoTokenizer.from_pretrained("ECHO-9")

# Generate (do_sample=True so the temperature setting actually applies)
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
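Since the base model is instruction-tuned, wrapping the prompt in the Qwen chat template usually behaves better than raw text. A minimal sketch, reusing the `model` and `tokenizer` objects from above (the user message is a placeholder):

```python
# Build a chat-formatted prompt using the tokenizer's built-in template
messages = [{"role": "user", "content": "Your prompt here"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```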
### With Ollama (Recommended)

1. Merge the adapter into the base model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Apply the adapter, then fold the LoRA weights into the base model
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "ECHO-9")
merged = model.merge_and_unload()

# Save the merged weights together with the tokenizer
merged.save_pretrained("./merged_model")
tokenizer = AutoTokenizer.from_pretrained("ECHO-9")
tokenizer.save_pretrained("./merged_model")
```

2. Create a Modelfile and import the merged model into Ollama (a sketch follows).
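What the Modelfile looks like depends on your Ollama version: recent releases can import a Hugging Face safetensors directory directly, while older ones need the weights converted to GGUF first (e.g. with llama.cpp). A minimal sketch, assuming the direct-import path works:

```
FROM ./merged_model

# Match the sampling temperature listed on this card
PARAMETER temperature 0.7
```

Then build and run it with `ollama create echo-9 -f Modelfile` and `ollama run echo-9` (the name `echo-9` is a placeholder).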
## Training Details
- LoRA Rank: 32
- LoRA Alpha: 64
- Batch Size: 2-4 (with gradient accumulation)
- Learning Rate: 2e-4
- Epochs: 3
- Quantization: 4-bit (QLoRA) during training
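For reference, a minimal sketch of how these hyperparameters map onto a peft/bitsandbytes setup. The actual training script is not published, so the quantization details, target modules, and dropout below are assumptions (common choices for Qwen2.5), not this card's confirmed configuration:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model for QLoRA training (assumed nf4 setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA rank/alpha as listed above; target_modules and dropout are assumptions
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```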
## License
Apache 2.0