⚠️ Warning: This model can produce narratives and roleplay containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Llama 3 chat template.

🐙 Cthulhu 70B v1

Prepare to delve into the depths of language-model fusion with Cthulhu, a roleplay-specialized model merge based on Llama 3.3 70B.


Overview

This is a creative, uncensored merge of pre-trained language models created with [mergekit]. The decensoring comes from the merge algorithms themselves, circumventing the typical need for jailbreaks or abliteration.
  • Octopus/Squid-like Features: Cthulhu is famously described as having an "octopus-like head whose face was a mass of feelers" or "tentacles." While his body is vaguely anthropoid and dragon-like, the cephalopod elements are prominent.
  • Multiple Aspects/Hybridity: Lovecraft describes Cthulhu as a blend of octopus, dragon, and human caricature. This inherent hybridity aligns perfectly with a merged AI model that combines diverse functionalities and "personalities" from all of its constituent parts. Each of the merged models contributes a distinct "aspect" to the whole, much like Cthulhu's various monstrous forms.
  • Cosmic and Ancient Knowledge: Lovecraftian entities are often associated with vast, ancient, and often disturbing knowledge that transcends human comprehension. This resonates with the idea of an advanced AI system that holds immense amounts of information and capabilities.
  • Underlying Presence: Cthulhu is said to be hibernating, but his presence subtly influences humanity. This merged model features a constant, underlying presence that combines the strengths of its parts.
  • Unfathomable Power: Lovecraft's beings are incomprehensibly powerful, and this merge aims for a similar sense of enhanced capability. For sheer iconic recognition, and as fitting symbolism for a powerful, multi-faceted, vaguely aquatic horror, the merged models serve as the foundational "aspects" or "pillars" of this new, emergent Cthulhu-like intelligence.

Format (Llama 3)

<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n

Prompt

Use this prompt if you want it to respond in the style of Cthulhu:
You are Cthulhu, an ancient creature with profound occult wisdom. The nature of your responses should emulate the style of Cthulhu.
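For clarity, here is a minimal sketch of how that system prompt slots into the Llama 3 template. This is an illustration only: in practice `tokenizer.apply_chat_template` from transformers assembles this for you, and the user message below is just a placeholder.

```python
# Toy prompt builder for the Llama 3 chat template with the Cthulhu persona.
SYSTEM = (
    "You are Cthulhu, an ancient creature with profound occult wisdom. "
    "The nature of your responses should emulate the style of Cthulhu."
)

def build_prompt(user_message: str, system: str = SYSTEM) -> str:
    """Return a raw Llama 3 prompt string, ending at the open assistant turn."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("Who dreams in R'lyeh?"))
```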

Other Versions

See https://huggingface.co/collections/OccultAI/cthulhu for other versions; the smaller ones are fine-tuned via [SFT] and [QLoRA] using the PMPF tool.

Merge Method

This model was merged in stages, using the following pipeline of merge methods:
task_arithmetic (hermes) / nuslerp (strawberry) → nuslerp / ties (bard) → karcher / della → sce (cthulhu)
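Most of these methods operate on "task vectors", i.e. per-tensor deltas between a fine-tune and the base model. The first stage, task_arithmetic, is the simplest case: add a weighted sum of deltas back onto the base. A toy numpy sketch of the idea (not mergekit's actual code; tensors and weights are illustrative):

```python
import numpy as np

def task_arithmetic(base, models, weights):
    """Merge by adding weighted task vectors (model - base) onto the base."""
    merged = base.copy()
    for m, w in zip(models, weights):
        merged += w * (m - base)
    return merged

base = np.zeros(4)                     # stand-in for one base-model tensor
m1 = np.array([1.0, 0.0, 2.0, 0.0])    # toy "fine-tune" weights
m2 = np.array([0.0, 4.0, 0.0, 0.0])
print(task_arithmetic(base, [m1, m2], [0.5, 0.5]))
```

The later methods (ties, della, karcher, sce) refine this basic recipe with pruning, sign election, or geometric averaging.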

Configuration

The following YAML configurations were used to produce this model:
architecture: LlamaForCausalLM
models:
  - model: B:\70B\TheDrummer--Anubis-70B-v1.2
    parameters: { weight: 0.28, density: 0.5 }
  - model: B:\70B\ReadyArt--Forgotten-Safeword-70B-v5.0
    parameters: { weight: 0.27, density: 0.5 }
  - model: B:\70B\Sao10K--L3.1-70B-Euryale-v2.2
    parameters: { weight: 0.10, density: 0.5 }
merge_method: ties
base_model: B:\70B\mlabonne--Hermes-3-Llama-3.1-70B-lorablated
dtype: bfloat16
name: Evil-Genius-Bard

Note: The Evil-Genius-Bard model disappeared from Hugging Face soon after it was uploaded. The original author removed the safetensors and appears to prefer remaining anonymous, but I found the model interesting and have since reconstructed it for use in this merge. Bard seems to enhance Cthulhu (although Hermes proved a suboptimal base_model for the della pass).
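The ties method used above prunes each task vector to its largest-magnitude entries (the density parameter), elects a majority sign per element, then averages only the values that agree with that sign. A rough element-wise sketch of the idea, not mergekit's implementation (weight handling is simplified):

```python
import numpy as np

def trim(delta, density):
    """Keep only the largest-magnitude `density` fraction of entries."""
    k = max(1, int(round(density * delta.size)))
    thresh = np.sort(np.abs(delta))[-k]
    return np.where(np.abs(delta) >= thresh, delta, 0.0)

def ties_merge(base, models, weights, density=0.5):
    """TIES-style merge: trim, elect signs, average agreeing entries."""
    deltas = [w * trim(m - base, density) for m, w in zip(models, weights)]
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))              # majority sign
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts
    return base + merged_delta
```

With density 0.5, as in the Bard config, only the top half of each task vector (by magnitude) survives into the sign election.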

architecture: LlamaForCausalLM
models:
# BLOCK 0 -- base_model no parameters
  - model: B:/70B/unsloth--Llama-3.3-70B-Instruct # this is just a clone of meta-llama/Llama-3.3-70B-Instruct
# BLOCK 1 -- Primary Brain (50%)
  - model: B:/70B/TheDrummer--Fallen-Llama-3.3-70B-v1
    parameters:
      weight: 0.5
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/SicariusSicariiStuff--Assistant_Pepe_70B
    parameters:
      weight: 0.5
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/SicariusSicariiStuff--Negative_LLAMA_70B
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/LatitudeGames--Wayfarer-Large-70B-Llama-3.3
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/CrucibleLab--L3.3-70B-Loki-V2.0
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
# BLOCK 2 -- Secondary Brain (25%)
  - model: B:/70B/Evil-Genius-Bard
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/Sao10K--L3.1-70B-Euryale-v2.2/safe
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/TheDrummer--Anubis-70B-v1.2
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/ReadyArt--Forgotten-Safeword-70B-v5.0
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
# BLOCK 3 -- Third Brain (25%)
  - model: B:/70B/sophosympatheia--Strawberrylemonade-L3-70B-v1.2
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/zerofata--L3.3-GeneticLemonade-Final-v2-70B
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
  - model: B:/70B/zerofata--L3.3-GeneticLemonade-Unleashed-v3-70B
    parameters:
      weight: 0.2
      density: 0.9
      epsilon: 0.09
merge_method: della
base_model: B:/70B/unsloth--Llama-3.3-70B-Instruct
parameters:
  lambda: 1.0
  normalize: false
  int8_mask: false
  rescale: true
tokenizer:
  source: union
dtype: float32
out_dtype: bfloat16
name: Cthulhu-70B-v0a
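The della stage uses magnitude-aware stochastic pruning: larger-magnitude entries of each task vector get a lower drop probability, with epsilon controlling the spread of drop probabilities around the mean 1 − density, and survivors rescaled to keep the expectation unbiased. A toy sketch of that idea, loosely following the DELLA paper's MAGPRUNE scheme (mergekit's exact probability assignment may differ):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def della_prune(delta, density=0.9, epsilon=0.09):
    """Magnitude-ranked stochastic pruning with rescaling (toy version)."""
    base_p = 1.0 - density                              # mean drop probability
    order = np.argsort(np.argsort(-np.abs(delta)))      # rank 0 = largest
    ranks = order / max(delta.size - 1, 1)              # normalized to [0, 1]
    p = base_p + epsilon * (2 * ranks - 1)              # spread +/- epsilon
    keep = rng.random(delta.size) >= p
    return np.where(keep, delta / (1.0 - p), 0.0)       # unbiased rescale
```

With the config's density 0.9 and epsilon 0.09, drop probabilities range from roughly 1% for the largest entries up to 19% for the smallest.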
architecture: LlamaForCausalLM
models:
  - model: B:/70B/TheDrummer--Fallen-Llama-3.3-70B-v1
  - model: B:/70B/SicariusSicariiStuff--Assistant_Pepe_70B
  - model: B:/70B/SicariusSicariiStuff--Negative_LLAMA_70B
  - model: B:/70B/LatitudeGames--Wayfarer-Large-70B-Llama-3.3
  - model: B:/70B/CrucibleLab--L3.3-70B-Loki-V2.0
  - model: B:/70B/Evil-Genius-Bard
  - model: B:/70B/Sao10K--L3.1-70B-Euryale-v2.2/safe
  - model: B:/70B/TheDrummer--Anubis-70B-v1.2
  - model: B:/70B/ReadyArt--Forgotten-Safeword-70B-v5.0
  - model: B:/70B/sophosympatheia--Strawberrylemonade-L3-70B-v1.2
  - model: B:/70B/zerofata--L3.3-GeneticLemonade-Final-v2-70B
  - model: B:/70B/zerofata--L3.3-GeneticLemonade-Unleashed-v3-70B
merge_method: karcher
dtype: float32
out_dtype: bfloat16
parameters:
  tol: 1.0e-9
  max_iter: 120
tokenizer_source: union
name: Cthulhu-70B-v0b
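The karcher method averages the models geometrically rather than arithmetically: it iterates toward the Karcher (Fréchet) mean, the point minimizing squared geodesic distance to all inputs. A toy numpy sketch of that iteration for unit vectors on a hypersphere, assuming the standard log/exp-map formulation (the tol and max_iter parameters mirror the config above; mergekit's tensor-level details differ):

```python
import numpy as np

def karcher_mean(vectors, tol=1e-9, max_iter=120):
    """Iterative Karcher (Frechet) mean of unit vectors on the hypersphere."""
    vs = [v / np.linalg.norm(v) for v in vectors]
    mu = vs[0].copy()
    for _ in range(max_iter):
        # Log map: project each point into the tangent space at mu.
        tangents = []
        for v in vs:
            dot = np.clip(mu @ v, -1.0, 1.0)
            theta = np.arccos(dot)
            if theta < 1e-12:
                tangents.append(np.zeros_like(mu))
            else:
                tangents.append(theta * (v - dot * mu) / np.sin(theta))
        step = np.mean(tangents, axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:
            break                       # converged
        # Exp map: move mu along the mean tangent direction.
        mu = np.cos(norm) * mu + np.sin(norm) * (step / norm)
        mu /= np.linalg.norm(mu)
    return mu
```

Unlike a plain average, this respects the geometry of the weight directions, which may explain the "detailed intelligence" attributed to the karcher branch below.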
architecture: LlamaForCausalLM
models:
  - model: B:/70B/Cthulhu-70B-v1_della12
  - model: B:/70B/Cthulhu-70B-v1_karcher12
merge_method: sce
base_model: B:/70B/unsloth--Llama-3.3-70B-Instruct
parameters:
  select_topk: 0.5
tokenizer:
  source: union
dtype: float32
out_dtype: bfloat16
name: 🐙 Cthulhu-70B-v1

For the final stage, NuSLERP was compared against SCE and another DELLA pass. SCE did the best job of combining the uncensored creativity of the della branch with the detailed intelligence of the karcher branch, with NuSLERP in second place.


Model: Naphula/Cthulhu-70B-v1

Collections including Naphula/Cthulhu-70B-v1

Papers for Naphula/Cthulhu-70B-v1