Speaks much more naturally than V2, handles long context better, and handles multiple speakers better as well. Moved to a Q8 quant instead of Q4.
You can try out Luna's newest fine-tuned model (v5) here: https://lunapotato.com. I'm trying out new settings, so v5 may be worse than this one. If you care enough to test it out, email "[email protected]" with what you found to work best and what issues it had; I would love that.
Examples of conversations with Luna below:
From the website:
On Discord:
This model is a LoRA fine-tuned version of Gemma 3 4B, trained on FineTome, a large Reddit dataset, and DMs scraped from my friends and me.
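
Assuming the Q8 quant is distributed as a GGUF file, a minimal sketch of running it locally with llama-cpp-python might look like the following. The filename `luna-gemma3-4b-Q8_0.gguf` is a placeholder; swap in the actual file from this repo.

```python
from llama_cpp import Llama

# Hypothetical local filename; replace with the Q8_0 GGUF downloaded from this repo.
llm = Llama(model_path="luna-gemma3-4b-Q8_0.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hey Luna, how's your day going?"},
    ],
    max_tokens=256,
)

# Print the assistant's reply from the chat completion.
print(response["choices"][0]["message"]["content"])
```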