Llama-3.1-Ko-8B-magic8 made by "AIJOAH"

The merged model, combining Llama-3.1-Korean-8B-Instruct and Llama-3.1-Storm-8B, improves performance across several areas, including Korean-language instruction following, multilingual knowledge-based QA, reasoning, reduced hallucinations, and structured output generation (e.g., JSON, Markdown). This merge is particularly useful for developers seeking a strong Korean-capable model that also excels in logic, accuracy, and function calling, while remaining lightweight enough for local inference environments such as Ollama or vLLM.
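As a quick illustration of local inference, the sketch below queries the model through a vLLM OpenAI-compatible endpoint. The repo ID, port, and serve options are assumptions for the example, not part of this card; adjust them to your actual checkpoint and setup.

```python
# Minimal sketch: chatting with the model via a local vLLM OpenAI-compatible server.
# Assumes the server was started with something like:
#   vllm serve muzerai/Llama-3.1-Ko-8B-magic8 --dtype bfloat16
# The repo ID above is an assumption; use the path or ID of your own checkpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="muzerai/Llama-3.1-Ko-8B-magic8",  # assumed repo ID
    messages=[
        {"role": "system", "content": "You are a helpful Korean-speaking assistant. Respond in JSON."},
        # "Summarize the pros and cons of remote work in Korean, as JSON."
        {"role": "user", "content": "재택근무의 장단점을 한국어로 JSON 형식으로 요약해줘."},
    ],
    temperature=0.7,
    max_tokens=512,
)
print(response.choices[0].message.content)
```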

Merge Method

This model was merged using the DELLA merge method.

Models Merged

The following models were included in the merge:

- Llama-3.1-Korean-8B-Instruct
- Llama-3.1-Storm-8B
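For reference, a DELLA merge of these two checkpoints with mergekit would look roughly like the sketch below. The repo IDs, base model choice, and parameter values (density, weight, dtype) are illustrative assumptions; the exact configuration used to produce this model is not published here.

```python
# Illustrative sketch of a DELLA merge with mergekit.
# All repo IDs and parameter values below are assumptions, not the actual
# configuration behind Llama-3.1-Ko-8B-magic8.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: della
    base_model: meta-llama/Llama-3.1-8B-Instruct    # assumed base model
    models:
      - model: sh2orc/Llama-3.1-Korean-8B-Instruct  # assumed repo ID
        parameters:
          density: 0.5
          weight: 0.5
      - model: akjindal53244/Llama-3.1-Storm-8B     # assumed repo ID
        parameters:
          density: 0.5
          weight: 0.5
    dtype: bfloat16
    """)

with open("della-config.yml", "w", encoding="utf-8") as f:
    f.write(config)

# mergekit-yaml is mergekit's CLI entry point: config file in, merged checkpoint out.
subprocess.run(["mergekit-yaml", "della-config.yml", "./Llama-3.1-Ko-8B-magic8"], check=True)
```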

Citation

If you find our work helpful, feel free to cite it.


@misc{aijoah2025merged,
  title        = {Merged Llama-3.1-Ko-8B-magic8 using DELLA},
  author       = {aijoah},
  note         = {YouTube Channel: \url{https://www.youtube.com/@JayLee-gv8tv}},
  year         = {2025},
}

Contact

If you have any questions, please raise an issue or contact us at [email protected].
