# FLUX.2-dev-GGUF
This is a GGUF-quantized version of the original FLUX.2-dev model.
## How to Use (ComfyUI)
- Install the GGUF loader for ComfyUI: https://github.com/city96/ComfyUI-GGUF
- Add the **UNet Loader (GGUF)** node.
- Select `FLUX.2-dev-GGUF.gguf` and use it as the UNet in your pipeline.
- An example workflow is included here: `flux2_example_GGUF.json`
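Before wiring the file into the UNet Loader, you can sanity-check that the download is a valid GGUF container: GGUF files begin with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A minimal sketch (the file path is an example; adjust to your ComfyUI models directory):

```python
import struct

def gguf_version(path):
    """Return the GGUF format version of a file, or None if it
    is not a GGUF container.

    GGUF files start with the magic bytes b"GGUF" followed by a
    little-endian uint32 version field.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            return None  # truncated or corrupted download
        (version,) = struct.unpack("<I", f.read(4))
        return version

# Example usage (hypothetical path):
# gguf_version("models/unet/FLUX.2-dev-GGUF.gguf")
```

A `None` result usually means an interrupted download; re-fetch the file and compare its size against the listing on the repository page.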
## License
This model is distributed under the same license as the original FLUX.2-dev model.
Source license: https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.md