|
|
--- |
|
|
license: llama3.3 |
|
|
base_model: |
|
|
- meta-llama/Llama-3.3-70B-Instruct |
|
|
language: |
|
|
- en |
|
|
library_name: transformers |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
<img src="https://huggingface.co/Sao10K/Llama-3.3-70B-Vulpecula-r1/resolve/main/senkooo.jpg" alt="Senko" style="border-radius: 10px; max-width: 400px;"> |
|
|
</div> |
|
|
|
|
|
<div style="background: linear-gradient(to bottom, #ffb6c1, #ffe4e8); padding: 20px; border-radius: 15px; margin: 20px 0;"> |
|
|
<h1 align="center" style="color: #d35d6e; font-family: 'Noto Sans JP', sans-serif;">🦊 L3.3-70B-Vulpecula 🌸</h1>
|
|
|
|
|
<div style="background: rgba(255, 255, 255, 0.7); padding: 20px; border-radius: 10px; margin: 15px 0;"> |
|
|
<p style="color: #444; font-size: 16px;">Hi hi! 👋</p>
|
|
<p style="color: #444; font-size: 16px;">This is a collaboration work between <a href="https://huggingface.co/gradientputri" style="color: #d35d6e; text-decoration: none;">GradientPutri</a> and <a href="https://huggingface.co/Sao10K" style="color: #d35d6e; text-decoration: none;">Sao10K</a>.</p> |
|
|
<p style="color: #444; font-size: 16px;">This is a passion project of mine spanning the past few weeks, so we hope you like it.</p> |
|
|
<p style="color: #444; font-size: 16px;">While there may be some minor issues, we think the final result turned out well, and it produces the kind of outputs that were the main goal.</p>
|
|
<p style="color: #444; font-size: 16px;">Model card made by <a href="https://huggingface.co/gradientputri" style="color: #d35d6e; text-decoration: none;">GradientPutri</a>.</p> |
|
|
</div> |
|
|
|
|
|
<div style="background: rgba(255, 255, 255, 0.7); padding: 20px; border-radius: 10px; margin: 15px 0;"> |
|
|
<h2 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">📜 Licensing Information</h2>
|
|
<p style="color: #444; font-size: 16px;">This model is based on Meta's Llama 3.3 and is subject to the <a href="https://llama.meta.com/llama3_3/license/" style="color: #d35d6e;">Llama 3.3 Community License Agreement</a> and the <a href="https://www.llama.com/llama3_3/use-policy" style="color: #d35d6e;">Acceptable Use Policy</a>.</p> |
|
|
|
|
|
<div style="background: #fff3f6; padding: 15px; border-radius: 8px; margin: 15px 0; border-left: 4px solid #d35d6e;"> |
|
|
<p style="margin: 0; color: #333; font-weight: 500;"> |
|
|
While we are unable to disallow commercial usage, please note that this project was made with our own resources, time, and effort, and we'd rather not be discouraged from making future project models. We kindly request that commercial users reach out before deployment to discuss usage and proper attribution. We appreciate users who help maintain transparency in the AI ecosystem by keeping us informed of how our work is being used. The same goes for any merges or derivatives, hopefully :)
|
|
</p> |
|
|
</div> |
|
|
</div> |
|
|
|
|
|
<div style="background: rgba(255, 255, 255, 0.7); padding: 20px; border-radius: 10px; margin: 15px 0;"> |
|
|
<h2 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">📋 Model Details</h2>
|
|
<ul style="list-style: none; padding-left: 20px;"> |
|
|
<li style="margin: 10px 0;">💭 A thinking-based model inspired by DeepSeek-R1, trained through both SFT and a little bit of RL on creative writing data.</li>
|
|
<li style="margin: 10px 0;">🧠 Prefill (i.e. begin) assistant replies with <code>&lt;think&gt;\n</code> to activate thinking mode, or leave it out; the model works well without thinking too. A short usage sketch follows after this section.</li>
|
|
<li style="margin: 10px 0;">🎯 Improved steerability, instruct-roleplay, and creative control compared to the base model.</li>
|
|
</ul> |
|
|
</div> |
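
To make the thinking-mode prefill concrete, here is a minimal 🤗 Transformers sketch (not an official snippet; the repo id, prompt, and generation settings are placeholders you should adjust):

```python
# Minimal sketch: prefill the assistant turn with "<think>\n" to nudge the model
# into thinking mode. Drop the prefill line to generate without thinking.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Llama-3.3-70B-Vulpecula-r1"  # placeholder: use your local path or quantized variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening scene of a cozy mystery."},
]

# Render the Llama-3 chat template, then append the <think> prefill.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.75, min_p=0.1)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```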
|
|
|
|
|
|
|
|
<div style="background: rgba(255, 255, 255, 0.7); padding: 20px; border-radius: 10px; margin: 15px 0;"> |
|
|
<h2 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">📊 Dataset Composition</h2>
|
|
<ul style="list-style: none; padding-left: 20px;"> |
|
|
<li style="margin: 10px 0;">💾 Semi-synthetic chat/roleplaying datasets that have been re-made, cleaned, and filtered for repetition and output quality.</li>
|
|
<li style="margin: 10px 0;">💬 Human-written natural chat/roleplaying datasets, cleaned, filtered, and checked for quality.</li>
|
|
<li style="margin: 10px 0;">📚 A diverse instruct dataset drawn from a few different LLMs, cleaned and filtered for refusals and quality.</li>
|
|
<li style="margin: 10px 0;">🔍 Reasoning traces taken from DeepSeek-R1 for instruct, chat, and creative tasks, filtered and cleaned for quality.</li>
|
|
<li style="margin: 10px 0;">❌ Toxic / decensorship data was not needed for our purposes; the model is unrestricted enough as is.</li>
|
|
</ul> |
|
|
<p style="color: #666; font-style: italic;">Total token count: ~270M Tokens (210M Trainable), over 2 epochs.</p> |
|
|
</div> |
|
|
<div style="background: rgba(255, 255, 255, 0.7); padding: 20px; border-radius: 10px; margin: 15px 0;"> |
|
|
<h2 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">🎨 Formatting and Samplers</h2>
|
|
|
|
|
<h3 style="color: #d35d6e; padding-bottom: 10px;">Instruct Format: Llama-3-Instruct</h3> |
|
|
|
|
|
``` |
|
|
<|begin_of_text|><|start_header_id|>system<|end_header_id|> |
|
|
|
|
|
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> |
|
|
|
|
|
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> |
|
|
|
|
|
{output}<|eot_id|> |
|
|
--- |
|
|
Note that the newlines shown above are part of the format.
|
|
``` |
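
If you are assembling the prompt by hand instead of using a chat template, a sketch along these lines reproduces the format above (the system prompt and user input are placeholders; multi-turn chats repeat the user/assistant blocks):

```python
# Sketch: manually building a single-turn Llama-3-Instruct prompt string.
# system_prompt and user_input are placeholders for illustration only.
system_prompt = "You are a creative writing assistant."
user_input = "Describe a quiet village at dawn."

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
# The model's completion ends with <|eot_id|>, which can be used as a stop token.
```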
|
|
|
|
|
|
|
|
<h3 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">✨ Sampler Recommendations</h3>
|
|
|
|
|
```yaml |
|
|
temperature: 0.75 |
|
|
min_p: 0.1 |
|
|
repetition_penalty: 1.1


presence_penalty: 1.1
|
|
``` |
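
As one concrete example, these settings map onto vLLM's `SamplingParams` as sketched below (assuming a vLLM backend; any server exposing equivalent samplers works, and `max_tokens` here is only illustrative):

```python
# Sketch: the recommended samplers expressed for vLLM (backend choice is an assumption).
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(
    temperature=0.75,
    min_p=0.1,
    repetition_penalty=1.1,
    presence_penalty=1.1,
    max_tokens=1024,  # illustrative cap, not part of the recommendation
)

# A 70B model generally needs multiple GPUs; tensor_parallel_size is an example value.
llm = LLM(model="Sao10K/Llama-3.3-70B-Vulpecula-r1", tensor_parallel_size=4)
outputs = llm.chat(
    [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}],
    sampling_params,
)
print(outputs[0].outputs[0].text)
```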
|
|
|
|
|
<h3 style="color: #d35d6e; border-bottom: 2px solid #ff9eb5; padding-bottom: 10px;">⚙️ Training Details</h3>
|
|
|
|
|
```yaml |
|
|
# Iterations |
|
|
num_epochs: 2 |
|
|
|
|
|
# Batching - Global batch: 4 GPUs × micro_batch 2 × grad_accum 4 = 32
|
|
gradient_accumulation_steps: 4 |
|
|
micro_batch_size: 2 |
|
|
|
|
|
# Optimizer |
|
|
optimizer: paged_ademamix_8bit |
|
|
lr_scheduler: cosine |
|
|
learning_rate: 0.00002 |
|
|
max_grad_norm: 1 |
|
|
weight_decay: 0.01 |
|
|
``` |
|
|
</div> |
|
|
|
|
|
<div align="center" style="margin-top: 30px; color: #d35d6e;"> |
|
|
<p>🦊 Thank you for visiting! May the foxes bring you good fortune! 🌸</p>
|
|
</div> |
|
|
</div> |
|
|
|
|
|
<style> |
|
|
@import url('https://fonts.googleapis.com/css2?family=Noto+Sans+JP:wght@400;500;700&display=swap'); |
|
|
</style> |