Chroma111 committed commit 4b787a8 (verified · 1 parent: b898e5e)

Upload flux-ip-adapter-v2/README.md with huggingface_hub

---
license: apache-2.0
datasets:
- CaptionEmporium/coyo-hd-11m-llavanext
- CortexLM/midjourney-v6
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: image-to-image
library_name: diffusers
---

<img src="assets/banner-dark.png?raw=true" alt="Banner Picture 1" style="width:1024px;"/>
<a href="https://discord.gg/FHY2guThfy">
  <img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true" style="width:1024px;"/>
</a>
<img src="assets/ip_adapter_0.jpg?raw=true" alt="example_0" style="width:1024px;"/>
<img src="assets/mona_workflow.jpg?raw=true" alt="Mona Anime Workflow 1" style="width:1024px;"/>

This repository provides an IP-Adapter checkpoint for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.

See our [GitHub repository](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.

# Models
The IP-Adapter was trained at a resolution of 512x512 for 150k steps and at 1024x1024 for 350k steps while maintaining the aspect ratio.
We release the **v2 version**, which can be used directly in ComfyUI!

Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).

# Examples

See example results from our models below.
Some generation results, together with their input images, are also provided under "Files and versions".

# Inference

To try our models, you have two options:
1. Use `main.py` from our [official repo](https://github.com/XLabs-AI/x-flux)
2. Use our custom nodes for ComfyUI and test them with the provided workflows (see the `workflows` folder)

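For a quick script-based test outside ComfyUI, recent versions of `diffusers` can also load XLabs Flux IP-Adapter checkpoints. The sketch below is not part of this repo and makes assumptions: the `weight_name` and the keyword arguments follow the diffusers documentation for Flux IP-Adapter loading, and may differ across diffusers versions, so verify the exact file name in the checkpoint repo before relying on it.

```python
# Hedged sketch: attaching an XLabs Flux IP-Adapter via diffusers.
# weight_name and kwarg names below are assumptions -- check your
# diffusers version's docs and the checkpoint repo's file listing.
def build_ip_adapter_pipeline(
    ip_adapter_repo: str = "XLabs-AI/flux-ip-adapter",
    weight_name: str = "ip_adapter.safetensors",
    scale: float = 1.0,
):
    """Return a FLUX.1-dev pipeline with the IP-Adapter attached."""
    import torch  # third-party: pip install torch diffusers
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_ip_adapter(
        ip_adapter_repo,
        weight_name=weight_name,
        image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
    )
    pipe.set_ip_adapter_scale(scale)  # lower this if the reference dominates
    return pipe.to("cuda")

# Example use (requires a CUDA GPU and access to the FLUX.1-dev weights):
# from diffusers.utils import load_image
# pipe = build_ip_adapter_pipeline()
# image = pipe(
#     "a portrait in the style of the reference image",
#     ip_adapter_image=load_image("ref.jpg"),
#     num_inference_steps=25,
#     guidance_scale=3.5,
# ).images[0]
# image.save("ip_adapter_result.png")
```

Lowering the scale passed to `set_ip_adapter_scale` weakens the influence of the reference image, which mirrors the "play with IP strength" advice for the ComfyUI nodes.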
## Instructions for ComfyUI
1. Go to `ComfyUI/custom_nodes`
2. Clone [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui.git); the path should be `ComfyUI/custom_nodes/x-flux-comfyui/*`, where `*` is all the files in that repo
3. Go to `ComfyUI/custom_nodes/x-flux-comfyui/` and run `python setup.py`
4. Update x-flux-comfyui with `git pull`, or reinstall it
5. Download the CLIP-L `model.safetensors` from [OpenAI ViT CLIP large](https://huggingface.co/openai/clip-vit-large-patch14) and place it in `ComfyUI/models/clip_vision/*`
6. Download our IP-Adapter from [Hugging Face](https://huggingface.co/XLabs-AI/flux-ip-adapter/tree/main) and place it in `ComfyUI/models/xlabs/ipadapters/*`
7. Use the `Flux Load IPAdapter` and `Apply Flux IPAdapter` nodes, choose the right CLIP model, and enjoy your generations
8. You can find example workflows in the `workflows` folder of this repo
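
The two download steps above can be scripted with `huggingface_hub`. This is a minimal sketch, assuming the default `ComfyUI` root shown in the steps and the file names visible on the linked model pages; verify both before use.

```python
from pathlib import Path

# Asset layout for the x-flux-comfyui nodes (folders from the steps above).
# The (repo_id, filename) pairs are assumptions taken from the linked model
# pages -- check each repo's file listing before relying on them.
ASSET_LAYOUT = {
    "clip_vision": ("openai/clip-vit-large-patch14", "model.safetensors"),
    "ipadapter": ("XLabs-AI/flux-ip-adapter", "ip_adapter.safetensors"),
}

def target_dir(comfy_root: str, kind: str) -> str:
    """Return the folder ComfyUI expects for each asset kind."""
    sub = {
        "clip_vision": Path("models") / "clip_vision",
        "ipadapter": Path("models") / "xlabs" / "ipadapters",
    }[kind]
    return (Path(comfy_root) / sub).as_posix()

def download_assets(comfy_root: str = "ComfyUI") -> dict:
    """Download each asset into its expected folder; returns local paths."""
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    return {
        kind: hf_hub_download(
            repo_id=repo, filename=fname,
            local_dir=target_dir(comfy_root, kind),
        )
        for kind, (repo, fname) in ASSET_LAYOUT.items()
    }

# Example use:
# paths = download_assets("ComfyUI")
```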

If you get bad results, try adjusting the IP-Adapter strength.
### Limitations
The IP-Adapter is currently in beta.
We do not guarantee that you will get a good result right away; it may take several attempts.
<img src="assets/ip_adapter_2.jpg?raw=true" alt="example_2" style="width:1024px;"/>
<img src="assets/ip_adapter_3.jpg?raw=true" alt="example_3" style="width:1024px;"/>
<img src="assets/ip_adapter_1.jpg?raw=true" alt="example_1" style="width:1024px;"/>
<img src="assets/ip_adapter_4.jpg?raw=true" alt="example_4" style="width:1024px;"/>
<img src="assets/ip_adapter_5.jpg?raw=true" alt="example_5" style="width:1024px;"/>
<img src="assets/ip_adapter_6.jpg?raw=true" alt="example_6" style="width:1024px;"/>


## License
Our weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.<br/>