Edd Erland
14 followers · 63 following
erla_ndpg · Erland366
AI & ML interests: None yet
Recent Activity
Reacted to danielhanchen's post with 🤗, 🚀, and ❤️ about 10 hours ago:
"You can now run Kimi K2 Thinking locally with our Dynamic 1-bit GGUFs: https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF We shrank the 1T model to 245GB (-62%) & retained ~85% of accuracy on Aider Polyglot. Run on >247GB RAM for fast inference. We also collaborated with the Moonshot AI Kimi team on a system prompt fix! 🥰 Guide + fix details: https://docs.unsloth.ai/models/kimi-k2-thinking-how-to-run-locally"
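The quoted post is a how-to announcement; as a minimal sketch of the first step (fetching the quantized weights), the snippet below uses huggingface_hub's snapshot_download against the repo named in the post. The shard filename pattern is an assumption — consult the repo's file list and the linked guide for the actual quant names and the llama.cpp serving step.

from huggingface_hub import snapshot_download

# Download only the dynamic 1-bit shards from the repo named in the post.
# The "*UD-TQ1_0*" pattern is hypothetical; verify against the repo's file list.
snapshot_download(
    repo_id="unsloth/Kimi-K2-Thinking-GGUF",
    local_dir="Kimi-K2-Thinking-GGUF",
    allow_patterns=["*UD-TQ1_0*"],
)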
Organizations: None yet
Erland's models (175), sorted by recently updated:
Erland/LlaMA-3.2-1B-Instruct • Text Generation • 1B • Updated 6 days ago • 792
Erland/gemma-3-270m-it • Text Generation • 0.3B • Updated Sep 10 • 11
Erland/MTP-120M • 0.1B • Updated Jun 13 • 1
Erland/mtp-120M-4096-batch16-steps100000-20250613-111312 • Updated Jun 13
Erland/mtp-120M-4096-batch16-steps100000-20250613-110851 • Updated Jun 13
Erland/mtp-120M-4096-batch16-steps100000-20250613-110542 • Updated Jun 13
Erland/mtp-120M-4096-batch16-steps100000-20250613-105826 • Updated Jun 13
Erland/Qwen3Softpick-8B-Base • Text Generation • 8B • Updated Jun 4 • 1
Erland/gpt2-tokenizer • Updated Jun 3
Erland/DeepSeek-R1-0528-Qwen3-8B • Text Generation • 8B • Updated May 29
Erland/vanilla-1.8B-4096-model-HQQ-3bit • Text Generation • Updated May 29 • 1
Erland/vanilla-1.8B-4096-model-HQQ-2bit • Text Generation • Updated May 29
Erland/softpick-1.8B-4096-model-HQQ-3bit • Text Generation • Updated May 29 • 4
Erland/softpick-1.8B-4096-model-HQQ-2bit • Text Generation • Updated May 29 • 3
Erland/vanilla-340M-4096-model-HQQ-3bit • Text Generation • Updated May 29 • 1
Erland/Qwen2-0.5B-SFT-lora-default • Updated May 27
Erland/softpick-1.8B-4096-model-GPTQ-2bit • Text Generation • 0.2B • Updated May 27
Erland/softpick-1.8B-4096-model-GPTQ-4bit • Text Generation • 0.4B • Updated May 27
Erland/softpick-1.8B-4096-model-GPTQ-3bit • Text Generation • 0.3B • Updated May 27 • 4
Erland/softpick-1.8B-4096-model-GPTQ-8bit • Text Generation • 0.6B • Updated May 27 • 1
Erland/vanilla-1.8B-4096-model-GPTQ-2bit • Text Generation • 0.2B • Updated May 27
Erland/vanilla-1.8B-4096-model-GPTQ-4bit • Text Generation • 0.4B • Updated May 27 • 1
Erland/vanilla-1.8B-4096-model-GPTQ-3bit • Text Generation • 0.3B • Updated May 27 • 1
Erland/vanilla-1.8B-4096-model-GPTQ-8bit • Text Generation • 0.6B • Updated May 27
Erland/vanilla-340M-4096-model-GPTQ-4bit • Text Generation • 0.1B • Updated May 27
Erland/vanilla-1.8B-4096-model-AO-W4 • Text Generation • Updated May 22 • 1
Erland/vanilla-1.8B-4096-model-AO-W4A4 • Text Generation • Updated May 22 • 1
Erland/vanilla-1.8B-4096-model • Text Generation • 2B • Updated May 22
Erland/softpick-1.8B-4096-model-AO-W4 • Text Generation • Updated May 22 • 7
Erland/softpick-1.8B-4096-model-AO-W4A4 • Text Generation • Updated May 22 • 2
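All of the entries above are standard Hub model repos, so any of the text-generation checkpoints can be pulled with transformers. The sketch below uses one repo from the list purely as an example; it assumes the checkpoint's architecture is supported by the installed transformers version, and the -GPTQ/-HQQ/-AO variants would additionally need their respective quantization backends installed.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Erland/gemma-3-270m-it"  # example repo taken from the list above

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Generate a short continuation to confirm the checkpoint loads and runs.
inputs = tokenizer("Hello, ", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))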
Page 1 of 6