FLUX.2-dev-GGUF

This is a GGUF-quantized version of the original FLUX.2-dev model.

How to Use (ComfyUI)

  1. Install the GGUF loader for ComfyUI:
    https://github.com/city96/ComfyUI-GGUF
  2. Add the UNet Loader (GGUF) node.
  3. Select FLUX.2-dev-GGUF.gguf and use it as the UNet in your pipeline.
  4. Example workflow is included here:
    flux2_example_GGUF.json
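
The setup in steps 1–3 can be sketched on the command line. This is a minimal sketch assuming a standard ComfyUI directory layout (a custom_nodes/ folder for node packs and models/unet/ for diffusion models); check this repository's file listing for the exact .gguf filename to download:

```shell
# Run from the ComfyUI root directory (assumes a standard layout).
# Install the GGUF custom node pack:
git clone https://github.com/city96/ComfyUI-GGUF custom_nodes/ComfyUI-GGUF
pip install -r custom_nodes/ComfyUI-GGUF/requirements.txt

# Place the quantized model where the "Unet Loader (GGUF)" node looks for it:
mkdir -p models/unet
# (download or move the .gguf file from this repo into models/unet/)
```

After restarting ComfyUI, the file should appear in the model dropdown of the Unet Loader (GGUF) node.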

License

This model is distributed under the same license as the original FLUX.2-dev model.
Source license: https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.md

Model details

Format: GGUF
Model size: 32B params
Architecture: flux
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
