I hope a GGUF quantized version can be provided.
#1 opened by makisekurisu-jp
My VRAM is only 12GB; your model is too large for me to run.
WANdalf on Civitai helped extract a LoRA from the model. It can now be used with Nunchaku or other quantized models. The LoRAs will be uploaded later.