Built with Axolotl

See axolotl config

axolotl version: 0.12.2

# ===== Model =====

base_model: google/gemma-3-4b-it
processor_type: AutoProcessor

chat_template: gemma3

# Required flags for multimodal (vision-chat) training
skip_prepare_dataset: true
remove_unused_columns: false
sample_packing: false

#shuffle_merged_datasets: false
#shuffle_before_merging_datasets: false   # (defaults to false, but stating it explicitly is recommended)

ddp_find_unused_parameters: true


# ===== Data =====
eot_tokens:
  - <end_of_turn>
datasets:
  - path: vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl
    type: chat_template
    field_messages: messages
    split: null
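    # (an example record shape is sketched right after this config)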

val_set_size: 0.0
dataset_prepared_path:

# ===== Output / Logging =====
output_dir: ./outputs/gemma3-4b-v-KoV_0.0.4.jsonl
logging_steps: 1

# wandb integration (change or comment out as needed)
wandb_entity: minkyun1
wandb_project: kisti_vlm_axo
wandb_name: gemma3-4b-v-KoV_0.0.4.jsonl

# ===== LoRA / Quantization =====
#adapter: lora
# LoRA only on the language-model-side projections, as in LLaVA (safe default)
#lora_r: 256
#lora_alpha: 512
#lora_dropout: 0.05
#lora_target_modules: "model.language_model.layers.[\\d]+.(mlp|cross_attn|self_attn).(up|down|gate|q|k|v|o)_proj"

# Plenty of memory headroom; 4-bit would be a conservative starting point, but quantization is disabled here
load_in_4bit: false
load_in_8bit: false
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
flash_attention: true
eager_attention:

# ===== Optim & Train =====
optimizer: adamw_torch_fused
learning_rate: 4e-5
lr_scheduler: cosine
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0
seed: 42
sequence_len: 8192 
pad_to_sequence_len: false
excess_length_strategy: drop

# Per-GPU micro batch × accumulation → effective batch = 1 * 16 * 8 GPUs = 128
micro_batch_size: 1
gradient_accumulation_steps: 16

num_epochs: 5
evals_per_epoch: 1
saves_per_epoch: 1
# save_first_step: true

# ===== Multi-GPU: DeepSpeed (recommended) =====
# Fetch the bundled DeepSpeed presets first:
#   axolotl fetch deepspeed_configs
# For 2×A100 80GB with a ~7B model, ZeRO-2 is fast and stable
deepspeed: ds_zero2.json

# ===== Debug / reproducibility (optional) =====
# If multiprocess data preprocessing misbehaves, drop this to 1 to isolate the cause
# dataset_processes: 1

# ===== [Alternative] FSDP2 settings (use instead of DeepSpeed if preferred) =====
# fsdp_version: 2
# fsdp_config:
#   offload_params: false
#   cpu_ram_efficient_loading: true
#   auto_wrap_policy: TRANSFORMER_BASED_WRAP
#   transformer_layer_cls_to_wrap: LlamaDecoderLayer
#   state_dict_type: FULL_STATE_DICT
#   reshard_after_forward: true
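
For reference, the `datasets` entry above only states that each line of `gemma3-4b-v-KoV_0.0.4.jsonl` exposes a `messages` field (`field_messages: messages`) consumed by the `chat_template` loader. The sketch below shows one plausible record shape plus a quick pre-flight check; the exact image/text content-part layout is an assumption, not taken from the actual dataset.

```python
import json

# Hypothetical shape of one line in vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl
# (assumed: chat messages whose content is a list of image/text parts).
example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": "images/sample_0001.png"},  # placeholder path
                {"type": "text", "text": "이 그림을 설명해 주세요."},  # "Please describe this figure."
            ],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "이 그림은 ..."}],
        },
    ]
}
print(json.dumps(example, ensure_ascii=False, indent=2))

# Pre-flight check: every line must be valid JSON and carry the field named
# by `field_messages` in the config.
with open("vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        record = json.loads(line)
        assert "messages" in record, f"line {lineno}: missing `messages` field"
```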

outputs/gemma3-4b-v-KoV_0.0.4.jsonl

This model is a fine-tuned version of google/gemma-3-4b-it on the vlm_data_2025101_1/gemma3-4b-v-KoV_0.0.4.jsonl dataset.
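
The card ships no usage snippet, so here is a minimal inference sketch. It assumes the fine-tuned weights live in the `./outputs/gemma3-4b-v-KoV_0.0.4.jsonl` directory set as `output_dir` above (swap in the Hub repo id if the weights were pushed) and a `transformers` release with Gemma-3 support, such as the 4.55.2 listed below; the image URL and prompt are placeholders.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Path from output_dir above; replace with the Hub repo id if the weights were pushed.
model_id = "./outputs/gemma3-4b-v-KoV_0.0.4.jsonl"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# One image + text turn, mirroring the gemma3 chat template used during training.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/figure.png"},  # placeholder image URL
            {"type": "text", "text": "이 그림을 한국어로 설명해 주세요."},  # "Please describe this figure in Korean."
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens before decoding the answer.
answer = processor.decode(generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```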

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • total_eval_batch_size: 8
  • optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 131
  • training_steps: 2638
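
The batch figures above follow directly from the config; a quick consistency check of the arithmetic, using only values listed on this card:

```python
# Effective batch size and warmup, recomputed from the values listed above.
micro_batch_size = 1               # train_batch_size per device
gradient_accumulation_steps = 16
num_devices = 8
warmup_ratio = 0.05                # from the axolotl config
training_steps = 2638

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)      # 128, matching total_train_batch_size

warmup_steps = int(training_steps * warmup_ratio)
print(warmup_steps)                # 131, roughly the lr_scheduler_warmup_steps reported above
```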

Training results

Framework versions

  • Transformers 4.55.2
  • Pytorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.4
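
When reloading the checkpoint, it can save debugging time to confirm the local environment against these versions; a small check, assuming the same four packages are importable:

```python
# Compare the local environment against the versions this model was trained with.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.55.2",
    "torch": "2.6.0+cu124",
    "datasets": "4.0.0",
    "tokenizers": "0.21.4",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, version in expected.items():
    marker = "OK" if installed[name] == version else "differs"
    print(f"{name}: expected {version}, installed {installed[name]} ({marker})")
```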