steampunque/GLM-Z1-9B-0414-Hybrid-GGUF
Tags: GGUF, GLM Z1 9B 0414, quantized, 6-bit precision, 4-bit precision, conversational
License: MIT
Repository size: 20.2 GB, 1 contributor, 9 commits
Latest commit: 5a99441 (verified) by steampunque, "Update README.md", 2 days ago
Files:
.gitattributes                1.71 kB   Upload GLM-Z1-9B-0414.Q4_P_H.gguf with huggingface_hub   2 days ago
GLM-Z1-9B-0414.Q4_K_H.gguf    6.63 GB   Upload GLM-Z1-9B-0414.Q4_K_H.gguf with huggingface_hub   3 days ago
GLM-Z1-9B-0414.Q4_P_H.gguf    6.28 GB   Upload GLM-Z1-9B-0414.Q4_P_H.gguf with huggingface_hub   2 days ago
GLM-Z1-9B-0414.Q6_K_H.gguf    7.33 GB   Upload GLM-Z1-9B-0414.Q6_K_H.gguf with huggingface_hub   5 months ago
README.md                     7.04 kB   Update README.md                                          2 days ago
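The commit messages show the quantized files were uploaded with huggingface_hub, so they can be fetched the same way. Below is a minimal download sketch, not an official usage recipe from this repo: the repo id and filename are taken from the listing above, while the choice of the Q4_K_H file and the default cache location are assumptions.

```python
# Minimal sketch: download one quantized GGUF file from this repo
# using huggingface_hub (install with: pip install huggingface_hub).
# Repo id and filename come from the file listing above; picking the
# Q4_K_H quant is an arbitrary choice for illustration.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="steampunque/GLM-Z1-9B-0414-Hybrid-GGUF",
    filename="GLM-Z1-9B-0414.Q4_K_H.gguf",
)
print(f"GGUF file cached at: {model_path}")
```

The returned path points into the local Hugging Face cache and can then be passed to whatever GGUF-compatible runtime you use; see the repository README for the author's intended setup.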