---
base_model:
- nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- NousResearch/Hermes-3-Llama-3.1-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
- watt-ai/watt-tool-70B
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# MERGE2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [huihui-ai/Llama-3.3-70B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated) as the base model. A sketch of the Model Stock interpolation is given after the configuration below.

### Models Merged

The following models were included in the merge:

* [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF)
* [NousResearch/Hermes-3-Llama-3.1-70B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-70B)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [watt-ai/watt-tool-70B](https://huggingface.co/watt-ai/watt-tool-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: watt-ai/watt-tool-70B
  - model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - model: NousResearch/Hermes-3-Llama-3.1-70B
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
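Model Stock determines, for each tensor, how far to move from the base weights toward the average of the fine-tuned weights, using the geometry of the task vectors (fine-tuned minus base). Below is a minimal per-tensor sketch of that interpolation as described in the paper; it is an illustration only, not mergekit's actual `model_stock` code, and the `model_stock_tensor` helper is hypothetical.

```python
# Illustrative per-tensor Model Stock interpolation (arXiv:2403.19522).
# Not mergekit's implementation; a sketch of the paper's formula only.
import numpy as np

def model_stock_tensor(w0: np.ndarray, ws: list[np.ndarray]) -> np.ndarray:
    """Interpolate between the base tensor w0 and the mean of the
    fine-tuned tensors ws, with a ratio set by the task-vector angles."""
    deltas = [(w - w0).ravel() for w in ws]
    # Average pairwise cosine similarity between task vectors.
    cos = np.mean([
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas)
        for b in deltas[i + 1:]
    ])
    n = len(ws)
    t = n * cos / (1 + (n - 1) * cos)  # interpolation ratio from the paper
    w_avg = np.mean(ws, axis=0)
    return t * w_avg + (1 - t) * w0    # pull the average back toward the base
```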
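To reproduce the merge, save the configuration above to a file (the filename `merge2.yml` below is a placeholder) and run it through mergekit. The sketch below follows the Python usage shown in mergekit's README; the output path and option values are illustrative.

```python
# Reproduction sketch following mergekit's documented Python API.
# Paths are placeholders; the merge itself needs substantial disk and RAM,
# since five 70B checkpoints are loaded and the merge runs in float32.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./merge2.yml"  # the YAML configuration shown above
OUTPUT_PATH = "./MERGE2"     # where the merged model will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for tensor math if available
        copy_tokenizer=True,             # copy the base tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The command-line equivalent is `mergekit-yaml merge2.yml ./MERGE2 --cuda`.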
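Since the card declares `library_name: transformers` and a `llama3` chat template, the merged model should load like any Llama-3-family checkpoint. A minimal inference sketch follows; `your-namespace/MERGE2` is a placeholder repository id, and `bfloat16` matches the `out_dtype` of the merge.

```python
# Minimal inference sketch. "your-namespace/MERGE2" is a placeholder for
# the actual repository id of this merge; replace it before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/MERGE2"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype: bfloat16 in the config
    device_map="auto",           # shard the 70B weights across available devices
)

# The tokenizer carries a llama3 chat template (chat_template: llama3 above).
messages = [{"role": "user", "content": "Briefly explain what a model merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```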