---
tags:
- ltx-video
- image-to-video
pinned: true
language:
- en
license: other
pipeline_tag: text-to-video
library_name: diffusers
---

# SceneGen-Finetuned

This is a fine-tuned version of [`LTXV_2B_0.9.5`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.safetensors) trained on custom data.

## Model Details

- **Base Model:** [`LTXV_2B_0.9.5`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.safetensors)
- **Training Type:** LoRA fine-tuning
- **Training Steps:** 1500
- **Learning Rate:** 0.0002
- **Batch Size:** 1

## Sample Outputs

| | | | |
|:---:|:---:|:---:|:---:|
| ![example1](./samples/sample_0.gif)<br>*Prompt: A chef in a tall white hat is preparing elegant plated desserts with precision, while another man, holding a microphone, is narrating the scene like a TV show host.* | | | |

## Usage

This model is designed to be used with the LTXV (Lightricks Text-to-Video) pipeline.

### 🔌 Using Trained LoRAs in ComfyUI

To use the trained LoRA in ComfyUI:

1. Copy your ComfyUI-trained LoRA weights (`comfyui..safetensors` file) to the `models/loras` folder in your ComfyUI installation.
2. In your ComfyUI workflow:
   - Add the "LTXV LoRA Selector" node to choose your LoRA file
   - Connect it to the "LTXV LoRA Loader" node to apply the LoRA to your generation

You can find reference Text-to-Video (T2V) and Image-to-Video (I2V) workflows in the [official LTXV ComfyUI repository](https://github.com/Lightricks/ComfyUI-LTXVideo).

### Example Prompts

Example prompts used during validation:

- `A chef in a tall white hat is preparing elegant plated desserts with precision, while another man, holding a microphone, is narrating the scene like a TV show host.`

## License

This model inherits the license of the base model ([`LTXV_2B_0.9.5`](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.safetensors)).

## Acknowledgments

- Base model by [Lightricks](https://huggingface.co/Lightricks)
- Training infrastructure: [LTX-Video-Trainer](https://github.com/Lightricks/ltx-video-trainer)
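Since the card declares `library_name: diffusers`, the LoRA can presumably also be applied through the diffusers LTX pipeline outside of ComfyUI. The sketch below is an untested illustration: the LoRA repo id is a placeholder for wherever the weights are actually hosted, and the resolution, frame count, and step count are generic defaults, not values from this training run.

```python
# Minimal sketch of applying this LoRA with the diffusers LTX pipeline.
# Assumptions: the LoRA is hosted at a Hub repo you substitute for
# `lora_repo_id`; generation parameters are illustrative defaults.

PROMPT = (
    "A chef in a tall white hat is preparing elegant plated desserts with "
    "precision, while another man, holding a microphone, is narrating the "
    "scene like a TV show host."
)


def generate(lora_repo_id: str, output_path: str = "output.mp4") -> None:
    """Load the LTX-Video base model, apply the LoRA, and render one clip."""
    import torch
    from diffusers import LTXPipeline
    from diffusers.utils import export_to_video

    # Load the base model in bfloat16 and move it to the GPU.
    pipe = LTXPipeline.from_pretrained(
        "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
    ).to("cuda")

    # Apply the fine-tuned LoRA weights on top of the base model.
    pipe.load_lora_weights(lora_repo_id)

    frames = pipe(
        prompt=PROMPT,
        width=768,
        height=512,
        num_frames=121,  # roughly 5 seconds at 24 fps
        num_inference_steps=50,
    ).frames[0]
    export_to_video(frames, output_path, fps=24)
```

Calling `generate("your-username/SceneGen-Finetuned")` (hypothetical repo id) would write `output.mp4` on a machine with a CUDA GPU and recent `diffusers`/`torch` installed.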