TheBloke/Mistral-7B-AEZAKMI-v1-GPTQ
Tags: Text Generation · Transformers · Safetensors · mistral · text-generation-inference · 4-bit precision · gptq
License: other
Files and versions
Branch: gptq-4bit-32g-actorder_True · Mistral-7B-AEZAKMI-v1-GPTQ · 4.57 GB
1 contributor · History: 3 commits
Latest commit: TheBloke, "Upload README.md" (152531b), almost 2 years ago
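The branch name encodes the quantization settings (4-bit, group size 32, act-order enabled). As a minimal sketch, assuming the huggingface_hub client is installed, the files listed below can be fetched by passing the branch as the revision:

```python
# Minimal sketch: download this branch with huggingface_hub
# (assumes `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Mistral-7B-AEZAKMI-v1-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch shown above
)
print(local_dir)  # cached snapshot containing the files listed below
```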
File                       Size        Commit message       Last modified
.gitattributes             1.52 kB     initial commit       almost 2 years ago
LICENSE                    0 Bytes     GPTQ model commit    almost 2 years ago
README.md                  20.3 kB     Upload README.md     almost 2 years ago
config.json                1.07 kB     GPTQ model commit    almost 2 years ago
generation_config.json     111 Bytes   GPTQ model commit    almost 2 years ago
model.safetensors          4.57 GB     GPTQ model commit    almost 2 years ago
quantize_config.json       185 Bytes   GPTQ model commit    almost 2 years ago
special_tokens_map.json    411 Bytes   GPTQ model commit    almost 2 years ago
tokenizer.json             1.8 MB      GPTQ model commit    almost 2 years ago
tokenizer.model            493 kB      GPTQ model commit    almost 2 years ago
tokenizer_config.json      912 Bytes   GPTQ model commit    almost 2 years ago
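Since quantize_config.json carries the GPTQ parameters alongside the safetensors weights, the branch can be loaded directly with transformers. A minimal sketch, assuming a transformers install with GPTQ support (e.g. optimum plus auto-gptq); the prompt and generation settings are illustrative, not taken from the model card:

```python
# Minimal sketch: load the 4-bit GPTQ weights from this branch with transformers.
# Assumes GPTQ support is installed (e.g. optimum + auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-AEZAKMI-v1-GPTQ"
revision = "gptq-4bit-32g-actorder_True"

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    device_map="auto",  # place the quantized weights on the available GPU
)

# Illustrative prompt; consult the README for the model's expected prompt format.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```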