# Anime Face Diffusion Model
A diffusion model trained to generate 64x64 anime-style faces.
## Model Details
- Model type: Denoising Diffusion Probabilistic Model (DDPM)
- Training data: Anime faces dataset
- Image size: 64x64 RGB
- Timesteps: 1000
- Architecture: U-Net with sinusoidal time embeddings (sketched below)
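
The exact embedding used inside `SimpleUnet` is defined in `diffusion_model`, which is not shown here; the snippet below is only a minimal sketch of the standard transformer-style sinusoidal timestep embedding for illustration.

```python
import math
import torch

def timestep_embedding(t: torch.Tensor, dim: int) -> torch.Tensor:
    """Transformer-style sinusoidal embedding of integer timesteps t (shape [B])."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float()[:, None] * freqs[None, :]                    # [B, dim/2]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)  # [B, dim]
```

For example, `timestep_embedding(torch.tensor([0, 500, 999]), 128)` returns a `(3, 128)` tensor that the U-Net can inject at each resolution level.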
## Usage

```python
import torch
from diffusion_model import SimpleUnet, Diffuser  # your model code

# Load the checkpoint and restore the EMA weights
checkpoint = torch.load("model.pt", map_location="cpu")
model = SimpleUnet(in_channels=3)
model.load_state_dict(checkpoint["ema_model_state_dict"])  # use EMA weights
model.eval()

# Generate samples
# (Add your sampling code here)
```
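
The sampling step above is left as a placeholder. As a rough guide, here is a minimal DDPM ancestral-sampling sketch. It assumes the checkpointed model predicts the added noise from `(x_t, t)`, that images were normalized to `[-1, 1]`, and that training used the cosine schedule listed under Training Details; the `Diffuser` class imported above likely provides the project's own (preferable) implementation.

```python
import math
import torch

T = 1000                                   # timesteps, as listed above
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Cosine noise schedule (Nichol & Dhariwal, 2021)
s = 0.008
steps = torch.arange(T + 1, dtype=torch.float64)
f = torch.cos(((steps / T) + s) / (1 + s) * math.pi / 2) ** 2
alpha_bar = f / f[0]
betas = torch.clip(1 - alpha_bar[1:] / alpha_bar[:-1], 0.0, 0.999)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample(n: int = 16) -> torch.Tensor:
    """Ancestral DDPM sampling: start from pure noise and denoise step by step."""
    x = torch.randn(n, 3, 64, 64, device=device)
    for t in reversed(range(T)):
        t_batch = torch.full((n,), t, device=device, dtype=torch.long)
        eps = model(x, t_batch)            # ASSUMPTION: model predicts the noise eps
        beta, alpha, a_bar = betas[t].item(), alphas[t].item(), alpha_bars[t].item()
        mean = (x - (1 - alpha) / math.sqrt(1 - a_bar) * eps) / math.sqrt(alpha)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + math.sqrt(beta) * noise
    return x.clamp(-1, 1)                  # images in [-1, 1]

images = sample(16)                        # shape: (16, 3, 64, 64)
```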
## Training Details
- Trained for X epochs
- Batch size: 64
- Learning rate: 1e-4
- EMA decay: 0.9999 (see the update sketch after this list)
- Noise schedule: Cosine
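
For reference, the EMA weights stored in the checkpoint are typically maintained with an update like the one below (decay 0.9999). This is only a sketch of the usual pattern; the actual training script may differ.

```python
import copy
import torch

ema_model = copy.deepcopy(model)  # shadow copy, updated after every optimizer step

@torch.no_grad()
def ema_update(ema_model, model, decay: float = 0.9999) -> None:
    # Exponential moving average: ema <- decay * ema + (1 - decay) * current
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1 - decay)
```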
## Samples
(Add generated sample images here)
## Citation
If you use this model, please cite:
```bibtex
@misc{your-anime-diffusion,
  author    = {Your Name},
  title     = {Anime Face Diffusion Model},
  year      = {2024},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/your-username/anime-diffusion}
}
```