TheBloke / MetaMath-Mistral-7B-GPTQ
Tags: Text Generation · Transformers · Safetensors · meta-math/MetaMathQA · mistral · text-generation-inference · 4-bit precision · gptq
arXiv: 2309.12284, 2310.06825
License: apache-2.0
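The tags above mark this as a GPTQ-quantized Mistral model served through Transformers. As a rough loading sketch (not taken from the model card): the `revision` value below is the branch shown on this page, installing `optimum` plus a GPTQ backend such as `auto-gptq` is assumed, and the prompt is a placeholder rather than the model's documented MetaMath template.

```python
# Minimal sketch, assuming optimum + a GPTQ kernel backend are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/MetaMath-Mistral-7B-GPTQ"
revision = "gptq-8bit--1g-actorder_True"  # branch shown on this page; "main" holds the default quantization

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    device_map="auto",  # place the quantized weights on available GPU(s)
)

# Placeholder prompt; consult the model card for the recommended format.
prompt = "Question: What is 12 * 7?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```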
Branch: gptq-8bit--1g-actorder_True
File: MetaMath-Mistral-7B-GPTQ / tokenizer.model
Commit History
GPTQ model commit · 5ca51ff · TheBloke committed on Oct 31, 2023