---
base_model:
- allura-org/Qwen2.5-32b-RP-Ink
- Phr00t/Phr00tyMix-v3-32B
- nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored
- Delta-Vector/Archaeo-32B-KTO
- arcee-ai/Virtuoso-Medium-v2
- Phr00t/Phr00tyMix-v2-32B
library_name: transformers
tags:
- mergekit
- merge
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/zPB8Pmb6m4H7bKGw-dpmV.png)

# Phr00tyMix-v4-32B

Phr00tyMix-v3 increased creativity, but at the expense of some instruction following and coherency. This mix is intended to fix that, which should improve its storytelling and obedience. The model remains very creative, uncensored (when asked to be), and smart.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Phr00t/Phr00tyMix-v3-32B](https://huggingface.co/Phr00t/Phr00tyMix-v3-32B) as the base.

### Models Merged

The following models were included in the merge:
* [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
* [nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored](https://huggingface.co/nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored)
* [Delta-Vector/Archaeo-32B-KTO](https://huggingface.co/Delta-Vector/Archaeo-32B-KTO)
* [arcee-ai/Virtuoso-Medium-v2](https://huggingface.co/arcee-ai/Virtuoso-Medium-v2)
* [Phr00t/Phr00tyMix-v2-32B](https://huggingface.co/Phr00t/Phr00tyMix-v2-32B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: Phr00t/Phr00tyMix-v3-32B
dtype: bfloat16
models:
  - model: Delta-Vector/Archaeo-32B-KTO
  - model: allura-org/Qwen2.5-32b-RP-Ink
  - model: arcee-ai/Virtuoso-Medium-v2
  - model: Phr00t/Phr00tyMix-v2-32B
  - model: nicoboss/DeepSeek-R1-Distill-Qwen-32B-Uncensored
tokenizer:
  source: "Delta-Vector/Archaeo-32B-KTO"
```
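For intuition, Model Stock derives a single interpolation ratio from the angle between the fine-tuned models' task vectors (their differences from the base) and pulls the average of the fine-tunes back toward the base by that ratio. Below is a minimal pure-Python sketch of that idea on toy weight vectors; it is an illustration of the method from the paper, not the mergekit implementation, and the function name `model_stock` is made up for this example:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def model_stock(base, finetuned):
    """Merge fine-tuned weight vectors toward the base, Model Stock style.

    base: list of floats (base model weights)
    finetuned: list of k weight lists, one per fine-tuned model
    """
    k = len(finetuned)
    # Task vectors: each fine-tune's offset from the base weights.
    tvs = [[w - b for w, b in zip(ft, base)] for ft in finetuned]
    # Average pairwise cosine between task vectors (the paper assumes
    # this angle is roughly constant across pairs).
    cosines = [
        dot(tvs[i], tvs[j]) / (norm(tvs[i]) * norm(tvs[j]))
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = sum(cosines) / len(cosines)
    # Interpolation ratio t = k·cosθ / (1 + (k − 1)·cosθ).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Average of the fine-tuned weights, then interpolate toward the base.
    avg = [sum(ws) / k for ws in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

When the fine-tunes agree (cosθ → 1) the ratio t approaches 1 and the merge is just their average; when they point in orthogonal directions (cosθ = 0) the result collapses back to the base, which is why a well-chosen base model matters for this method.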