Gugugo-koen-7B-V1.1-GGUF
Details: https://github.com/jwj7140/Gugugo

This is a GGUF conversion of squarelike/Gugugo-koen-7B-V1.1.
Base Model: Llama-2-ko-7b
Training dataset: sharegpt_deepl_ko_translation
I trained on a single A6000 GPU for 90 hours.
Prompt Template
KO->EN
### 한국어: {sentence}</끝>
### 영어:
EN->KO
### 영어: {sentence}</끝>
### 한국어:
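The templates above can be assembled programmatically. The helper below is a minimal sketch; the GGUF filename and the llama-cpp-python call in the comments are assumptions, not taken from this card.

```python
# Build Gugugo's translation prompts (KO->EN and EN->KO).

def build_prompt(sentence: str, direction: str = "koen") -> str:
    """Format a sentence with Gugugo's translation prompt template."""
    if direction == "koen":
        return f"### 한국어: {sentence}</끝>\n### 영어:"
    return f"### 영어: {sentence}</끝>\n### 한국어:"

# Hypothetical inference with llama-cpp-python (needs a downloaded GGUF file):
# from llama_cpp import Llama
# llm = Llama(model_path="Gugugo-koen-7B-V1.1.Q4_K_M.gguf")  # filename is an assumption
# out = llm(build_prompt("안녕하세요"), stop=["</끝>", "###"], max_tokens=128)
# print(out["choices"][0]["text"].strip())
```

Stopping generation at `</끝>` (the template's end marker) keeps the model from continuing past the translation.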
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit