Note that Sana is an FP32 model, and this GGUF is only FP16, not even BF16, so for other quantizations create an FP32 GGUF first for better quality.
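If you go that route, a minimal sketch of writing an FP32 GGUF from an FP32 safetensors checkpoint with the gguf Python package could look like this (the file names and the "sana" architecture string are assumptions; whatever loader you use may expect specific tensor names and metadata):

```python
# Hedged sketch: dump an FP32 safetensors checkpoint into an FP32 GGUF
# with the `gguf` package (pip install gguf safetensors numpy).
import numpy as np
from gguf import GGUFWriter
from safetensors import safe_open

writer = GGUFWriter("sana_fp32.gguf", arch="sana")  # arch string is a guess
with safe_open("sana_fp32.safetensors", framework="np") as f:
    for name in f.keys():
        # Keep everything in full FP32 so later quants start from FP32.
        writer.add_tensor(name, f.get_tensor(name).astype(np.float32))

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```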

To use this model/quant you need to add Sana support to ComfyUI or GGUF support to the Sana custom nodes. Otherwise you will get: `ValueError: This model is not currently supported - (Unknown model architecture!)`

If you just need an FP16 variant, the simplest way is to use the official quant; if FP8 is needed, quantize the safetensors/pth checkpoint to FP8 and use it without GGUF.
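For illustration, a naive FP8 cast could look like the sketch below (requires torch >= 2.1 for float8_e4m3fn; the file names are placeholders, and keeping 1-D tensors unquantized is a precaution I am assuming, not a documented requirement):

```python
# Hedged sketch: naive FP8 (e4m3) cast of a safetensors checkpoint.
import torch
from safetensors.torch import load_file, save_file

state = load_file("sana_fp16.safetensors")  # placeholder file name
fp8_state = {
    # Leave 1-D tensors (biases, norm scales) in their original dtype.
    name: t.to(torch.float8_e4m3fn) if t.dim() > 1 else t
    for name, t in state.items()
}
save_file(fp8_state, "sana_fp8_e4m3.safetensors")
```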

These can be helpful:

- https://github.com/huggingface/diffusers/blob/main/docs/source/en/api/pipelines/sana.md#quantization
- https://github.com/NVlabs/Sana/blob/main/asset/docs/quantize/8bit_sana.md
- https://github.com/NVlabs/Sana/pull/249
- https://github.com/NVlabs/Sana/issues/128
- https://github.com/NVlabs/Sana/blob/main/tools/convert_sana_to_svdquant.py and https://github.com/NVlabs/Sana/blob/main/asset/docs/quantize/4bit_sana.md

The SVDQuant route (the last two links) is not stable, though: with the 592M model you can get an error like `RuntimeError: The expanded size of the tensor (2240) must match the existing size (1152) at non-singleton dimension 1. Target sizes: [2880, 2240, 1, 1]. Tensor sizes: [2880, 1152, 1, 1]`, so prepare a workaround for that case. Also note that convert_sana_to_svdquant.py only creates a safetensors version of the original pth; you still need to build the SVDQuant from it.
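For the diffusers route from the first link, 8-bit loading with bitsandbytes looks roughly like the abridged sketch below (the model ID and FP16 compute dtype follow that doc's example; requires diffusers, transformers, bitsandbytes, and accelerate):

```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig
from diffusers import SanaTransformer2DModel, SanaPipeline
from transformers import BitsAndBytesConfig, AutoModel

repo = "Efficient-Large-Model/Sana_1600M_1024px_diffusers"

# The text encoder is a transformers model, so it takes transformers' config.
text_encoder_8bit = AutoModel.from_pretrained(
    repo,
    subfolder="text_encoder",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
# The DiT is a diffusers model, so it takes diffusers' config.
transformer_8bit = SanaTransformer2DModel.from_pretrained(
    repo,
    subfolder="transformer",
    quantization_config=DiffusersBitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
pipe = SanaPipeline.from_pretrained(
    repo,
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)
image = pipe("a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana_8bit.png")
```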

Probably the easiest way: https://huggingface.co/Kijai/flux-fp8/discussions/7
