ginigini
14 followers
·
37 following
AI & ML interests
None yet
Recent Activity
liked a model about 18 hours ago: FINAL-Bench/Darwin-TTS-1.7B-Cross
liked a Space about 18 hours ago: FINAL-Bench/Darwin-TTS-1.7B-Cross
reacted to SeaWolf-AI's post about 18 hours ago
Darwin-TTS: 3% of an LLM's Brain Makes TTS Speak with Emotion, Zero Training

We blended 3% of Qwen3-1.7B (LLM) FFN weights into Qwen3-TTS-1.7B's talker module. The result: emotionally enhanced speech synthesis, with zero training, zero data, and zero GPU hours.

Try the Demo: https://huggingface.co/spaces/FINAL-Bench/Darwin-TTS-1.7B-Cross
Model Weights: https://huggingface.co/FINAL-Bench/Darwin-TTS-1.7B-Cross
Full Research Article: https://huggingface.co/blog/FINAL-Bench/darwin-tts

Qwen3-1.7B (LLM) and Qwen3-TTS-1.7B's talker share a 100% identical architecture: the same hidden_size (2048), the same number of layers (28), and the same number of heads (16). This enabled pure 1:1 weight blending across 84 FFN tensors with a single lerp operation.

At a 3% blend, emotion appears. At 5%, emotion intensifies. At 10%, the model breaks, producing 655-second outputs for a 3-second sentence, because the LLM's "keep generating" pattern overwhelms the TTS stop signal.

To our knowledge, this is the first training-free cross-modal weight transfer between an LLM and a TTS model. Prior work requires adapter training (SmolTolk, 2025), fine-tuning (CSLM, 2025), or massive end-to-end compute (GPT-4o). Darwin-TTS achieves cross-modal capability transfer in under 2 minutes on CPU.

The key insight: TTS models with LLM backbones already "think" in language. We are just restoring 3% of the original LLM's language-understanding patterns, particularly those related to emotional semantics and prosody planning. The code is three lines: load the model, load the LLM FFN, call p.lerp_(llm_weight, 0.03).

We are the creators of the Darwin Evolutionary Merge Framework. Darwin LLM V7 achieved GPQA Diamond 86.9% (HF Benchmark #3) through CMA-ES-optimized FFN crossbreeding. Darwin-TTS extends this principle from LLM-to-LLM merging to cross-modal LLM-to-TTS transfer. Apache 2.0.
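The blend described above can be sketched as follows. This is a minimal illustration, not the Darwin-TTS release code: the function name, the `"mlp"` tensor-name filter, and the state-dict interface are assumptions; only the `p.lerp_(llm_weight, 0.03)` call is taken from the post.

```python
import torch


def blend_ffn_weights(tts_state, llm_state, alpha=0.03):
    """Lerp matching FFN (MLP) tensors from an LLM state dict into a
    TTS talker state dict, in place. Returns the number of tensors blended.

    p.lerp_(other, alpha) computes p = (1 - alpha) * p + alpha * other,
    so alpha=0.03 keeps 97% of the TTS weights.
    """
    blended = 0
    for name, p in tts_state.items():
        # Only touch feed-forward tensors that exist in both models
        # with identical shapes (the post reports 84 such tensors).
        if "mlp" in name and name in llm_state and llm_state[name].shape == p.shape:
            p.lerp_(llm_state[name], alpha)
            blended += 1
    return blended


# Toy demonstration with tiny stand-in tensors:
tts = {"layers.0.mlp.up_proj.weight": torch.zeros(2)}
llm = {"layers.0.mlp.up_proj.weight": torch.ones(2)}
n = blend_ffn_weights(tts, llm, alpha=0.1)
```

In practice the two state dicts would come from the respective `from_pretrained` checkpoints; since only an in-place `lerp_` over matching tensors is needed, the whole operation runs on CPU, consistent with the under-2-minutes claim.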
Organizations
None yet
spaces
2
pinned
Build error
HeartMuLa: A Family of Open Sourced Music Foundation Models
Build error
FinePDFs: Liberating 3T of the finest tokens from PDFs
models
0
None public yet
datasets
0
None public yet