Tülu 3 8B aligned with DPO on UltraFeedback (β = 0.01)

This repo contains a LoRA adapter created by aligning Tülu 3 8B on the UltraFeedback Binarized dataset using Direct Preference Optimization (DPO). It was trained as part of a series of models for studying DPO alignment.

Model details

See the base model card for usage and chat template details.
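As a minimal sketch, the adapter can be loaded on top of the base model with transformers and peft. The base model id used below (allenai/Llama-3.1-Tulu-3-8B) is an assumption; confirm it against the base model card linked above.

```python
# Minimal inference sketch. The base model id is assumed, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "allenai/Llama-3.1-Tulu-3-8B"  # assumed base model
adapter_id = "jmajkutewicz/Llama-3.1-Tulu-3-8B-DPO_ultrafeedback"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the DPO-trained LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "What is Direct Preference Optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```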

Training hyperparameters

  • DPO β: 0.01
  • Epochs: 1
  • Batch size: 8
  • Learning rate: 1e-05
  • Learning rate scheduler: cosine
  • Learning rate warmup ratio: 0.1
  • Gradient accumulation: 2
  • LoRA:
    • rank: 64
    • alpha: 64
    • dropout: 0.05
    • target modules: [q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj]
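For illustration, here is roughly how the hyperparameters above map onto TRL's DPOTrainer. This is a sketch under stated assumptions, not the actual training script: the dataset id and split name, the base model id, and a recent TRL version (one exposing DPOConfig and the processing_class argument) are all assumptions.

```python
# Sketch of a DPO LoRA training run matching the listed hyperparameters.
# Dataset id/split, base model id, and TRL version are assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "allenai/Llama-3.1-Tulu-3-8B"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# UltraFeedback Binarized: rows contain "prompt", "chosen", "rejected".
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="tulu3-8b-dpo-ultrafeedback",
    beta=0.01,                      # DPO temperature from the title
    num_train_epochs=1,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,  # effective batch size 16
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # TRL wraps the model with LoRA adapters itself
)
trainer.train()
```

With peft_config passed in, DPOTrainer applies the LoRA adapters and uses the frozen base model as the implicit reference model, so no separate ref_model needs to be loaded.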

License

This adapter is released under Meta's Llama 3.1 Community License Agreement. Llama 3.1 is © Meta Platforms, Inc.

Citation

If you find this work helpful, please cite:

TBA