See axolotl config
axolotl version: 0.12.2
# ===== Model =====
base_model: google/gemma-3-4b-it
processor_type: AutoProcessor
chat_template: gemma3
# Required flags for multimodal (vision-chat) training
skip_prepare_dataset: true
remove_unused_columns: false
sample_packing: false
#shuffle_merged_datasets: false
#shuffle_before_merging_datasets: false  # (defaults to false, but setting it explicitly is recommended)
ddp_find_unused_parameters: true
# ===== Data =====
eot_tokens:
- <end_of_turn>
datasets:
  - path: vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl
    type: chat_template
    field_messages: messages
    split: null
val_set_size: 0.0
dataset_prepared_path:
# ===== Output / Logging =====
output_dir: ./outputs/gemma3-4b-v-KoV_0.0.4.jsonl
logging_steps: 1
# wandb integration (change or comment out as needed)
wandb_entity: minkyun1
wandb_project: kisti_vlm_axo
wandb_name: gemma3-4b-v-KoV_0.0.4.jsonl
# ===== LoRA / Quantization =====
#adapter: lora
# As in LLaVA, LoRA only on the language-model-side projections (example defaults; see the regex check sketched after the config)
#lora_r: 256
#lora_alpha: 512
#lora_dropout: 0.05
#lora_target_modules: "model.language_model.layers.[\\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj"
# Plenty of memory headroom; if it runs short, switch to 4-bit for stability
load_in_4bit: false
load_in_8bit: false
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
flash_attention: true
eager_attention:
# ===== Optim & Train =====
optimizer: adamw_torch_fused
learning_rate: 4e-5
lr_scheduler: cosine
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
seed: 42
sequence_len: 8192
pad_to_sequence_len: false
excess_length_strategy: drop
# Per-GPU micro batch and gradient accumulation → effective batch = 1 * 16 * num_GPUs
micro_batch_size: 1
gradient_accumulation_steps: 16
num_epochs: 5
evals_per_epoch: 1
saves_per_epoch: 1
# save_first_step: true
# ===== Multi-GPU: DeepSpeed (recommended) =====
# Fetch and use the DeepSpeed presets:
# axolotl fetch deepspeed_configs
# For 2×A100 80GB and a ~7B model, ZeRO-2 is fast and stable (an illustrative ZeRO-2 JSON is sketched after the config)
deepspeed: ds_zero2.json
# ===== Debug / Reproducibility (optional) =====
# If multi-process dataset preprocessing causes problems, lower this to 1 to isolate the cause
# dataset_processes: 1
# ===== [Alternative] FSDP2 settings (if you want FSDP instead of DeepSpeed) =====
# fsdp_version: 2
# fsdp_config:
#   offload_params: false
#   cpu_ram_efficient_loading: true
#   auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   transformer_layer_cls_to_wrap: Gemma3DecoderLayer
#   state_dict_type: FULL_STATE_DICT
#   reshard_after_forward: true
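The datasets entry in the config above points at a chat_template-style JSONL whose records carry a messages field. Purely as a sketch of what one line of such a file might look like, assuming the usual multimodal content convention (a list of type: image / type: text parts); the image path and texts are placeholders, not taken from the actual KoV data:

```python
import json

# Hypothetical single record for a chat_template multimodal dataset.
# Field names follow the common "messages" convention; the image path
# and texts are placeholders, not real entries from the dataset.
record = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": "images/sample_0001.png"},
                {"type": "text", "text": "Please describe this figure."},
            ],
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "The figure shows ..."},
            ],
        },
    ]
}

# One JSON object per line, as expected of a .jsonl dataset file.
print(json.dumps(record, ensure_ascii=False))
```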
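The commented-out lora_target_modules value is a regular expression over module names. The snippet below only exercises the regex itself (it does not reproduce how Axolotl/PEFT applies it internally); the candidate module names are illustrative:

```python
import re

# Pattern from the (commented-out) lora_target_modules line; the YAML
# double backslash becomes a single \d once the string is parsed.
pattern = re.compile(
    r"model.language_model.layers.[\d]+."
    r"(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj"
)

candidates = [
    "model.language_model.layers.10.self_attn.q_proj",       # matches
    "model.language_model.layers.3.mlp.up_proj",             # matches
    "model.vision_tower.encoder.layers.0.self_attn.q_proj",  # no match
]

for name in candidates:
    print(f"{name}: {bool(pattern.fullmatch(name))}")
```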
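deepspeed: ds_zero2.json refers to a DeepSpeed ZeRO-2 config that can be obtained with axolotl fetch deepspeed_configs, as noted in the comments. Below is only an illustrative sketch of the typical shape of such a file, written out from Python; it is not the exact preset Axolotl ships, and the "auto" values are resolved by the Trainer integration at runtime:

```python
import json

# Illustrative ZeRO-2 DeepSpeed config. The keys are standard DeepSpeed
# options, but this sketch is not the preset fetched by Axolotl.
ds_zero2 = {
    "zero_optimization": {
        "stage": 2,                  # ZeRO stage 2: shard optimizer state + gradients
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "bf16": {"enabled": "auto"},     # follows bf16: true in the config above
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("ds_zero2.json", "w") as f:
    json.dump(ds_zero2, f, indent=2)
```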
outputs/gemma3-4b-v-KoV_0.0.4.jsonl
This model is a fine-tuned version of google/gemma-3-4b-it on the vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl dataset.
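Since no LoRA adapter is enabled in the config above, the run saves full model checkpoints to output_dir. A minimal loading and inference sketch with transformers, assuming that directory is used as-is; the image URL and prompt are placeholders, and the exact multimodal message keys can differ slightly between transformers versions:

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

ckpt = "./outputs/gemma3-4b-v-KoV_0.0.4.jsonl"  # output_dir from the config above

processor = AutoProcessor.from_pretrained(ckpt)
model = AutoModelForImageTextToText.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder image and prompt; swap in real inputs.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/sample.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the model's reply.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```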
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 131
- training_steps: 2638
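The reported totals follow directly from the per-device settings; a quick sketch of the arithmetic, using the values listed above:

```python
train_batch_size = 1             # per-device micro batch
eval_batch_size = 1
gradient_accumulation_steps = 16
num_devices = 8

# total_train_batch_size = 1 * 16 * 8
print(train_batch_size * gradient_accumulation_steps * num_devices)  # 128

# total_eval_batch_size (no gradient accumulation at eval time) = 1 * 8
print(eval_batch_size * num_devices)  # 8
```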
Training results
Framework versions
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4