See axolotl config

axolotl version: `0.16.0.dev0`

```yaml
# === Model Configuration ===
base_model: Qwen/Qwen3.5-27B
load_in_8bit: false
load_in_4bit: false
# === Training Setup ===
num_epochs: 2
micro_batch_size: 8
gradient_accumulation_steps: 4
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_r: 64
lora_alpha: 512
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - down_proj
  - up_proj
  - linear_attn.in_proj_qkv
  - linear_attn.in_proj_z
  - linear_attn.out_proj
# === Hyperparameter Configuration ===
optimizer: adamw_torch_8bit
learning_rate: 1e-5
lr_scheduler: constant
weight_decay: 0.001
max_grad_norm: 0.1
warmup_ratio: 0.05
cosine_min_lr_ratio: 0.1
# === Data Configuration ===
datasets:
  - path: output.parquet
    ds_type: parquet
    type:
      chat_template: tokenizer_default
dataset_prepared_path: last_run_prepared
# === Hardware Optimization ===
gradient_checkpointing: offload
# === Wandb Tracking ===
wandb_project: qwen-27b-seemo
# === Checkpointing ===
saves_per_epoch: 1
# === Advanced Settings ===
output_dir: ./model-output
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: false
plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
```
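With this file saved as, say, `config.yaml`, a run like this is typically launched with `axolotl train config.yaml` (older axolotl releases used `accelerate launch -m axolotl.cli.train config.yaml`); check the CLI of your installed version.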
# model-output

This model is a LoRA adapter for Qwen/Qwen3.5-27B, fine-tuned on the `output.parquet` dataset.
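Because this repository holds a LoRA adapter rather than merged weights, it must be loaded on top of the base model. Below is a minimal, untested inference sketch; the model ids come from this card, and the generation settings are illustrative only.

```python
# Minimal inference sketch (untested). Ids are taken from this card; argument
# names follow current Transformers/PEFT conventions and may vary by version.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3.5-27B"                         # base model
adapter_id = "allura-forge/q3527-rpsleemo-adpt-ep2"  # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    dtype=torch.bfloat16,  # bf16 matches the training config; use
                           # torch_dtype= on older Transformers releases
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

To produce a standalone checkpoint instead, PEFT's `merge_and_unload()` can fold the adapter into the base weights before saving.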
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch, 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 4
- training_steps: 90
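
For reference, the derived values follow from the config above: `total_train_batch_size = micro_batch_size × gradient_accumulation_steps = 8 × 4 = 32` (assuming a single training device), and the 4 warmup steps are consistent with `warmup_ratio × training_steps = 0.05 × 90 = 4.5` truncated to an integer (the exact rounding behavior is an assumption).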
### Training results

### Framework versions
- PEFT 0.18.1
- Transformers 5.3.0
- PyTorch 2.8.0+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2
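
This stack can be approximated with `pip install peft==0.18.1 transformers==5.3.0 datasets==4.5.0 tokenizers==0.22.2` plus a PyTorch 2.8.0 wheel built against CUDA 12.8; exact pins and the right PyTorch build depend on your platform.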