Delete unused files, fix documentation
- README.md +3 -9
- docs/TRAINING.md +0 -6
- src/model/generator.py +0 -5
- src/utils/image_processor.py +1 -5
README.md
CHANGED
@@ -20,7 +20,7 @@ python_version: "3.11"
 [](https://opensource.org/licenses/MIT)
 [](https://huggingface.co/MJaheen/Pepe_The_Frog_model_v1_lora)

-[Demo](https://huggingface.co/spaces/MJaheen/Pepe-Meme-Generator) • [Documentation](./docs/) • [Training Guide](./docs/TRAINING.md) • [Report Bug](https://github.com/
+[Demo](https://huggingface.co/spaces/MJaheen/Pepe-Meme-Generator) • [Documentation](./docs/) • [Training Guide](./docs/TRAINING.md) • [Report Bug](https://github.com/MJaheen/-Pepe-Meme-Generator-/issues)

 </div>

@@ -33,12 +33,12 @@ python_version: "3.11"
 - [Installation](#-installation)
 - [Usage](#-usage)
 - [Model Information](#-model-information)
-- [Performance Optimization](#-performance-optimization)
 - [Project Structure](#-project-structure)
 - [Training](#-training-your-own-model)
 - [Contributing](#-contributing)
 - [License](#-license)
 - [Acknowledgments](#-acknowledgments)
+- [Contact & Support](#-contact--support)

 ---

@@ -227,8 +227,6 @@ pepe-meme-generator/
 │   └── TRAINING.md                      # Model training guide
 ├── models/                              # Downloaded models (gitignored)
 ├── outputs/                             # Generated images (gitignored)
-├── scripts/                             # Utility scripts
-├── tests/                               # Test files
 ├── diffusion_model_finetuning.ipynb     # Training notebook
 ├── requirements.txt                     # Python dependencies
 ├── .gitignore                           # Git ignore rules
@@ -288,7 +286,6 @@ Or check out the **[diffusion_model_finetuning.ipynb](./diffusion_model_finetuni
 - **Stable Diffusion 1.5** - Base diffusion model
 - **LoRA** - Low-Rank Adaptation for efficient fine-tuning
 - **LCM** - Latent Consistency Model for fast inference
-- **DPM Solver** - Fast diffusion sampling

 ### Image Processing
 - **Pillow (PIL)** - Image manipulation
@@ -315,7 +312,7 @@ Contributions are welcome! Here's how you can help:

 ```bash
 # Clone and setup
-git clone https://github.com/
+git clone https://github.com/MJaheen/-Pepe-Meme-Generator-
 cd pepe-meme-generator
 python -m venv venv
 source venv/bin/activate
@@ -340,9 +337,6 @@ streamlit run src/app.py
 **Issue**: Slow generation on CPU
 **Solution**: Use "Pepe + LCM (FAST)" model with 6 steps

-**Issue**: Model not loading
-**Solution**: Clear Streamlit cache with "Clear Cache & Reload" button
-
 **Issue**: Import errors
 **Solution**: Reinstall dependencies: `pip install -r requirements.txt --force-reinstall`

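The stack the README keeps (Stable Diffusion 1.5 base, the Pepe LoRA from the model card, LCM sampling at ~6 steps) can be reproduced outside the Streamlit app with stock `diffusers`. The sketch below is illustrative, not the Space's actual code: the base-model id, the LCM-LoRA adapter, the prompt, and the parameter values are all assumptions.

```python
# Hypothetical sketch of the SD 1.5 + Pepe LoRA + LCM setup described in the README.
# Requires: diffusers, transformers, peft, torch. Ids and values below are assumptions.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder base-model id
    torch_dtype=torch.float32,          # float32 keeps CPU inference simple
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Style LoRA from the model card linked above, plus an LCM-LoRA adapter for few-step sampling.
pipe.load_lora_weights("MJaheen/Pepe_The_Frog_model_v1_lora", adapter_name="pepe")
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
pipe.set_adapters(["pepe", "lcm"], adapter_weights=[1.0, 1.0])

image = pipe(
    "pepe the frog drinking coffee, meme style",
    num_inference_steps=6,   # few-step LCM sampling, matching the "6 steps" tip above
    guidance_scale=1.5,      # LCM works best with low guidance
).images[0]
image.save("pepe.png")
```

Running only 6 denoising steps is what keeps CPU generation tolerable, which is the same trade-off the troubleshooting entry recommends.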
docs/TRAINING.md
CHANGED
@@ -51,12 +51,6 @@ This guide covers how to fine-tune your own Stable Diffusion model using LoRA (L
 - RAM: 32GB system RAM
 - Storage: 50GB+ SSD

-**Cloud Options**:
-- Google Colab (Free T4 GPU)
-- Kaggle Notebooks (Free GPU)
-- Lambda Labs
-- RunPod
-- Vast.ai

 ### Software Requirements

src/model/generator.py
CHANGED
@@ -57,11 +57,6 @@ class PepeGenerator:

         Args:
             config: ModelConfig instance. If None, uses default configuration.
-
-        Example:
-            >>> config = ModelConfig()
-            >>> config.USE_LCM = True  # Enable fast generation
-            >>> generator = PepeGenerator(config)
         """
         self.config = config or ModelConfig()
         self.device = self._get_device(self.config.FORCE_CPU)
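The usage removed from the docstring is still a reasonable quick-start if it gets re-homed (for example in the README). A rough equivalent is below; the import paths are guesses from the project layout and are not confirmed by this diff.

```python
# Usage sketch based on the docstring example removed above.
# Import paths are assumptions inferred from the repository structure.
from src.model.config import ModelConfig
from src.model.generator import PepeGenerator

config = ModelConfig()
config.USE_LCM = True              # enable fast LCM generation
generator = PepeGenerator(config)  # device is chosen internally via _get_device()
```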
src/utils/image_processor.py
CHANGED
@@ -224,11 +224,7 @@ class ImageProcessor:

         Returns:
             Enhanced PIL Image (modified in-place)
-
-        Example:
-            >>> image = Image.open("soft_image.png")
-            >>> enhanced = ImageProcessor.enhance_image(image, sharpness=1.3, contrast=1.2)
-            >>> enhanced.save("sharp_image.png")
+
         """

         # Sharpen
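Likewise, the enhancement example dropped from this docstring can live in user-facing docs instead. A minimal sketch, reproducing the removed calls with the import path assumed from the repository layout:

```python
# Sketch reproducing the removed enhance_image docstring example.
# The ImageProcessor import path is an assumption from the repo layout.
from PIL import Image
from src.utils.image_processor import ImageProcessor

image = Image.open("soft_image.png")
enhanced = ImageProcessor.enhance_image(image, sharpness=1.3, contrast=1.2)
enhanced.save("sharp_image.png")
```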