Update/Upload model card for transformer
README.md CHANGED

@@ -54,7 +54,7 @@ base_model:
 * **Data Efficient:** Uses a fraction of data (e.g., ~1%) compared to other methods.
 
 <div align="center">
-<img src="resource/abstract_fig.png" width="100%" height="100%"/>
+<img src="https://github.com/yyyyyxie/textflux/blob/main/resource/abstract_fig.png" width="100%" height="100%"/>
 </div>
 
 
@@ -82,7 +82,7 @@ We are excited to release [**TextFlux-beta**](https://huggingface.co/yyyyyxie/te
 
 Considering that single-line editing is a primary use case for many users and generally yields more stable, high-quality results, we have released new weights optimized for this scenario.
 
-Unlike the original model which renders glyphs onto a full-size mask, the beta version utilizes a **single-line image strip** for the glyph condition. This approach not only reduces unnecessary computational overhead but also provides a more stable and high-quality supervisory signal. This leads directly to the significant improvements in both single-line and small text rendering (see example [here](resource/demo_singleline.png)).
+Unlike the original model which renders glyphs onto a full-size mask, the beta version utilizes a **single-line image strip** for the glyph condition. This approach not only reduces unnecessary computational overhead but also provides a more stable and high-quality supervisory signal. This leads directly to the significant improvements in both single-line and small text rendering (see example [here](https://github.com/yyyyyxie/textflux/blob/main/resource/demo_singleline.png)).
 
 
 To use these new models, please refer to the updated files: demo.py, run_inference.py, and run_inference_lora.py. While the beta models retain the ability to generate multi-line text, we **highly recommend** using them for single-line tasks to achieve the best performance and stability.
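The "single-line image strip" glyph condition described in the changed paragraph can be pictured with a small sketch. This is only an illustration of the layout difference between a full-size glyph mask and a single-line strip, not TextFlux's actual preprocessing: the function names, the Pillow-based rendering, and the `arial.ttf` font path are assumptions for the example.

```python
# Illustrative sketch only: NOT the TextFlux preprocessing code.
# Assumes Pillow is installed and a .ttf font is available at the given path.
from PIL import Image, ImageDraw, ImageFont


def render_fullsize_glyph_mask(text, canvas_size, text_box, font_path="arial.ttf"):
    """Original-style condition: glyphs drawn into a target box on a full-size mask."""
    mask = Image.new("RGB", canvas_size, "black")
    draw = ImageDraw.Draw(mask)
    x0, y0, x1, y1 = text_box
    font = ImageFont.truetype(font_path, size=max(8, (y1 - y0) - 4))
    draw.text((x0, y0), text, fill="white", font=font)
    return mask


def render_singleline_glyph_strip(text, strip_height=64, font_path="arial.ttf"):
    """Beta-style condition: the same glyphs rendered onto a compact single-line strip."""
    font = ImageFont.truetype(font_path, size=strip_height - 8)
    # Measure the rendered text so the strip is only as wide as needed.
    left, top, right, bottom = font.getbbox(text)
    strip = Image.new("RGB", (right - left + 16, strip_height), "black")
    draw = ImageDraw.Draw(strip)
    # Offset by the glyph bounding box so the text is padded and vertically centered.
    draw.text((8 - left, (strip_height - (bottom - top)) // 2 - top),
              text, fill="white", font=font)
    return strip


if __name__ == "__main__":
    # Full-size mask: mostly empty pixels around one small text region.
    mask = render_fullsize_glyph_mask("Hello TextFlux", (1024, 1024), (100, 480, 600, 540))
    # Single-line strip: no wasted canvas around the glyphs.
    strip = render_singleline_glyph_strip("Hello TextFlux")
    print(mask.size, strip.size)
```

The point of the comparison is the one made in the diff: a full-size mask spends most of its pixels on empty background, while a strip sized to the text line keeps the glyph signal dense, which is consistent with the stated gains on single-line and small text rendering.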