Erotophobia-24B-v1.1

Model Banner

My second merge and model ever! Literally depraved.

I simplified the configuration and replaced the personality model. I think it fits nicely now and produces a model that actually works. Since this is my first model, I keep thinking it's very good. Don't get me wrong, it is "working as intended", but yeah, try it out for yourself!

Heavily inspired by FlareRebellion/DarkHazard-v1.3-24b and ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B.

I would personally like to thank sleepdeprived3 for the amazing finetunes, DoppelReflEx for giving me the dream of making a merge model someday, and the people in the BeaverAI Club for the great inspiration.

Luv <3

Recommended Usage

Use Mistral-V7-Tekken-T4!

I personally use nsigma 1.5 and temp 4 with everything else neutralized. A bit silly, yeah, but add just a tiny bit of min_p (if you want) and then turn up XTC and DRY.
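As a quick reference, the settings above can be written out as a preset. This is only a sketch: the key names are illustrative rather than tied to any particular backend's API, and the min_p, XTC, and DRY values are assumed starting points (the card only says "a tiny bit" and "turn up"), while nsigma 1.5 and temp 4 come from the recommendation itself.

```python
# Hypothetical sampler preset mirroring the recommended settings.
# Key names and the min_p/XTC/DRY values are illustrative assumptions;
# only temperature=4 and nsigma=1.5 come from the card.
preset = {
    "temperature": 4.0,      # recommended: high temp...
    "nsigma": 1.5,           # ...kept coherent by top-n-sigma
    "min_p": 0.02,           # "a tiny bit of min_p" (optional)
    "top_k": 0,              # neutralized
    "top_p": 1.0,            # neutralized
    "xtc_probability": 0.5,  # assumed "turned up" XTC value
    "xtc_threshold": 0.1,    # assumed XTC threshold
    "dry_multiplier": 0.8,   # assumed "turned up" DRY value
}
```

Map these onto whatever your frontend or backend calls its sampler fields.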

Also try this one too Mistral-V7-Tekken-T5-XML, the system prompt is very nice.

Safety

erm... :3

Quants

Thanks to Artus for providing the Q8 GGUF quants here:
https://huggingface.co/ArtusDev/yvvki_Erotophobia-24B-v1.1-GGUF

Other quants are still on the way...

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the DARE TIES merge method, with cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as the base model.
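The core trick in DARE is "drop and rescale": each fine-tune's task vector (its weights minus the base weights) is randomly sparsified, and the survivors are rescaled so the expected contribution is unchanged. A minimal toy sketch, not mergekit's actual implementation:

```python
import random

def dare_sparsify(delta, density, seed=0):
    """DARE drop-and-rescale: keep each delta weight with probability
    `density`, zero the rest, and rescale survivors by 1/density so the
    expected value of the task vector is preserved."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# Toy task vector (fine-tune weights minus base weights).
delta = [0.5, -0.2, 0.1, 0.4, -0.3, 0.25, -0.15, 0.05]
sparse = dare_sparsify(delta, density=0.3)
```

With density 0.3 (as in the config below), roughly 70% of each task vector is dropped before merging, which reduces interference between the donor models.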

Models Merged

The following models were included in the merge:

- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- aixonlab/Eurydice-24b-v3
- ReadyArt/Omega-Darker_The-Final-Directive-24B
- ReadyArt/Forgotten-Safeword-24B-v4.0

Configuration

The following YAML configuration was used to produce this model:

merge_method: dare_ties
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tokenizer:
  source: union
chat_template: auto
models:
  - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition # uncensored
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b # personality
    parameters:
      weight: 0.3
  - model: aixonlab/Eurydice-24b-v3 # creativity & storytelling
    parameters:
      weight: 0.3
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B # unhinged
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0 # lube
    parameters:
      weight: 0.2
parameters:
  density: 0.3
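Putting the config together: each donor's task vector is DARE-sparsified at density 0.3, then TIES elects a per-parameter sign and sums only the agreeing deltas, weighted as above. This is a toy end-to-end sketch on three parameters, not mergekit's real tensor-level code:

```python
import random

def sparsify(delta, density, rng):
    """DARE: randomly keep `density` of the delta, rescale by 1/density."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties(base, deltas, weights, density=0.3, seed=0):
    """Toy DARE-TIES: sparsify each task vector, elect a per-parameter
    sign by weighted majority, and add only the agreeing deltas."""
    rng = random.Random(seed)
    sparse = [sparsify(d, density, rng) for d in deltas]
    merged = list(base)
    for i in range(len(base)):
        # Sign election: the sign of the weighted delta sum wins.
        total = sum(w * s[i] for w, s in zip(weights, sparse))
        sign = 1.0 if total >= 0 else -1.0
        # Keep only contributions that agree with the elected sign.
        merged[i] += sum(w * s[i] for w, s in zip(weights, sparse)
                         if s[i] * sign > 0)
    return merged

# Four toy task vectors, using the weights from the config above.
base = [0.0, 0.0, 0.0]
deltas = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1],
          [-0.2, 0.1, 0.3], [0.1, -0.3, 0.2]]
merged = dare_ties(base, deltas, weights=[0.3, 0.3, 0.2, 0.2])
```

The weights (0.3/0.3/0.2/0.2) control how much each donor pulls the base, and the sign election is what keeps opposing fine-tunes from cancelling each other out.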