base_model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
datasets:
  - NewEden/Orion-LIT
  - NewEden/Orion-Asstr-Stories-16K
  - Mielikki/Erebus-87k
  - NewEden/RP-logs-V2-Experimental-prefixed
  - NewEden/Creative_Writing-Complexity
  - NewEden/Discord-Filtered
  - NewEden/DeepseekRP-Filtered
  - NewEden/Storium-Prefixed-Clean
  - NewEden/Basket-Weaving-Filtered
  - NewEden/LIMARP-Complexity
  - NewEden/Misc-Data-Sharegpt-Prefixed
  - NewEden/BlueSky-10K-Complexity
  - NewEden/OpenCAI-ShareGPT
  - PocketDoc/Dans-Personamaxx-VN
  - PocketDoc/Dans-Kinomaxx-VanillaBackrooms
  - PocketDoc/Dans-Personamaxx-Logs
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - lodrick-the-lafted/kalo-opus-instruct-3k-filtered
  - anthracite-org/nopm_claude_writing_fixed
  - anthracite-org/kalo_opus_misc_240827
  - anthracite-org/kalo_misc_part2
  - NewEden/Claude-Instruct-5K
  - NewEden/Claude-Instruct-2.7K
tags:
  - qwen
  - roleplay
  - finetune
  - storywriting
  - llama-cpp
  - gguf-my-repo
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/jg2NWmCUfPyzizm2USjMt.jpeg

Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF

This model was converted to GGUF format from Delta-Vector/Hamanasu-Magnum-QwQ-32B using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


This model is a finetune of Hamanasu-QwQ-V2-RP that replicates the prose of the Claude models Opus and Sonnet. Read more about the model's training on my blog: https://openai-sucks.bearblog.dev/. The model is suited for traditional RP. Thanks to Ruka-Hamanasu for funding the training.


Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp
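
To confirm the binaries are on your PATH, something like the following should print the build info (assuming a recent llama.cpp release, which ships a --version flag):

llama-cli --version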

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF --hf-file hamanasu-magnum-qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
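
For interactive use, the CLI also has a conversation mode and the usual generation knobs; a minimal sketch (the sampler settings here are illustrative, not tuned for this model):

# chat interactively; -ngl offloads layers to the GPU if llama.cpp was built with GPU support
llama-cli --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF --hf-file hamanasu-magnum-qwq-32b-q4_k_m.gguf \
  -cnv -c 4096 -n 512 --temp 0.8 -ngl 99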

Server:

llama-server --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF --hf-file hamanasu-magnum-qwq-32b-q4_k_m.gguf -c 2048
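
Once the server is running (it listens on http://localhost:8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch with curl (the prompt is just an example):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Write the opening line of a noir story."}],
    "max_tokens": 128
  }'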

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF --hf-file hamanasu-magnum-qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Triangle104/Hamanasu-Magnum-QwQ-32B-Q4_K_M-GGUF --hf-file hamanasu-magnum-qwq-32b-q4_k_m.gguf -c 2048
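
Both binaries can also load a locally downloaded GGUF file directly with -m instead of the --hf-repo/--hf-file pair; a sketch assuming the file sits in the current directory:

./llama-cli -m hamanasu-magnum-qwq-32b-q4_k_m.gguf -p "The meaning to life and the universe is"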