Essay-generating branch of MHENN. A 4-bit quantized model is available in the file "mhennlitQ4_K_M.gguf".
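
A minimal sketch of running the quantized file locally, assuming the llama-cpp-python bindings are installed and the GGUF has been downloaded from this repo; the prompt and sampling settings are only examples, not a documented prompt format.

```python
# Sketch: load the 4-bit GGUF with llama-cpp-python and generate an essay.
from llama_cpp import Llama

llm = Llama(model_path="mhennlitQ4_K_M.gguf", n_ctx=4096)

out = llm(
    "Write a short essay on the history of the printing press.",
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```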

Finetuned for 650 steps on an NVIDIA V100 on a Google Colab instance, using the netcat420/quiklit dataset.
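
The actual training script is not included here; the following is only an illustrative sketch of a comparable 650-step supervised finetune with transformers. The dataset column name "text", the precision, and every hyperparameter other than the step count are assumptions, and the real run may have used adapters or quantization to fit a 7B model on a V100.

```python
# Illustrative sketch only: a short causal-LM finetune of the base model on
# netcat420/quiklit. Column name "text" and all hyperparameters except
# max_steps=650 are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

ds = load_dataset("netcat420/quiklit", split="train")
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=ds.column_names)

args = TrainingArguments(
    output_dir="mhennlit",
    max_steps=650,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    fp16=True,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```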

https://huggingface.co/mistralai/Mistral-7B-v0.1 <--------- BASE MODEL
