---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- generated_from_trainer
datasets:
- phxdev/creed
model-index:
- name: outputs/mymodel
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0.dev0`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-7B
bf16: auto
dataset_processes: 32
datasets:
- type: completion
  field: text
  path: phxdev/creed
  trust_remote_code: false
gradient_accumulation_steps: 1
gradient_checkpointing: false
learning_rate: 0.0002
lisa_layers_attribute: model.layers
load_best_model_at_end: false
load_in_4bit: false
load_in_8bit: true
lora_alpha: 16
lora_dropout: 0.05
lora_r: 8
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
loraplus_lr_embedding: 1.0e-06
lr_scheduler: cosine
max_prompt_len: 512
mean_resizing_embeddings: false
micro_batch_size: 16
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: ./outputs/mymodel
pretrain_multipack_attn: true
pretrain_multipack_buffer_size: 10000
qlora_sharded_model_loading: false
ray_num_workers: 1
resources_per_worker:
  GPU: 1
sample_packing_bin_size: 200
sample_packing_group_size: 100000
save_only_model: false
save_safetensors: true
sequence_len: 4096
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
trl:
  log_completions: false
  ref_model_mixup_alpha: 0.9
  ref_model_sync_steps: 64
  sync_ref_model: false
  use_vllm: false
  vllm_device: auto
  vllm_dtype: auto
  vllm_gpu_memory_utilization: 0.9
use_ray: false
val_set_size: 0.0
weight_decay: 0.0
```

</details><br>
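
For readers who work with PEFT directly rather than through Axolotl, the adapter settings above (`lora_r: 8`, `lora_alpha: 16`, `lora_dropout: 0.05`, with all attention and MLP projections targeted) correspond roughly to the `LoraConfig` sketched below. This is an illustrative equivalent, not the exact object Axolotl constructs internally.

```python
# Rough PEFT equivalent of the LoRA settings in the Axolotl config above.
# Illustrative sketch only; Axolotl builds its own adapter config.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```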

# outputs/mymodel

This model is a LoRA adapter for [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B), fine-tuned on the phxdev/creed dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 7
- num_epochs: 5.0

### Training results

No evaluation split was used (`val_set_size: 0.0`), so no evaluation results are reported.

### Framework versions

- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
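
Because this repository holds only LoRA adapter weights, inference requires loading the Qwen/Qwen2.5-7B base model and attaching the adapter with PEFT. The sketch below is a minimal, hedged example: the adapter path shown is the training `output_dir` from the config, so substitute the published adapter repository id or your local path as appropriate.

```python
# Minimal inference sketch: load the base model and attach the LoRA adapter.
# The adapter path is the training output_dir (assumption); replace it with
# the actual local path or Hub repository id where the adapter is stored.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

model = PeftModel.from_pretrained(base_model, "outputs/mymodel")

inputs = tokenizer("Write a short introduction:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```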