LoRA Model for LLaVA OneVision

This is a LoRA (Low-Rank Adaptation) adapter fine-tuned on top of the lmms-lab/llava-onevision-qwen2-0.5b-ov base model.
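As background, LoRA freezes the base weights and trains only a low-rank update, so an adapter like this one stores far fewer parameters than a full fine-tune. A minimal numeric sketch of the idea (generic dimensions and scaling; not this adapter's actual configuration):

```python
import numpy as np

# LoRA replaces a full weight update dW with a low-rank product B @ A,
# so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (init to 0)

x = rng.normal(size=(d_in,))
# Forward pass with the adapter: base output plus the scaled low-rank update
y = W @ x + (alpha / r) * (B @ (A @ x))

# With B initialized to zero, the adapter starts out as an exact no-op
assert np.allclose(y, W @ x)
```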

Usage

from peft import PeftModel
from llava.model.builder import load_pretrained_model

# Load the base model (the second positional argument is model_base, unused here)
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov",
    None,
    model_name="llava_qwen",
    device_map="auto",
)

# Apply the LoRA adapter weights and switch to inference mode
model = PeftModel.from_pretrained(model, "mianaro3/EEE-BENCH-LORA-0.5")
model.eval()
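After loading, the adapter can optionally be folded into the base weights (PEFT exposes this as `model.merge_and_unload()`), which removes the per-call adapter overhead at inference time. Numerically, the merge just adds the scaled low-rank product to the frozen weight; a minimal sketch of that identity, independent of LLaVA:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 6, 2, 8
W = rng.normal(size=(d, d))    # frozen base weight
A = rng.normal(size=(r, d))    # trained down-projection
B = rng.normal(size=(d, r))    # trained up-projection

x = rng.normal(size=(d,))
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))   # runtime adapter path
W_merged = W + (alpha / r) * (B @ A)              # folded ("merged") weight
y_merged = W_merged @ x                           # single matmul at inference

# Merging changes the weight layout, not the model's outputs
assert np.allclose(y_adapter, y_merged)
```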

Files

  • adapter_config.json: LoRA configuration (rank, scaling, target modules)
  • adapter_model.safetensors: LoRA adapter weights
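For orientation, adapter_config.json uses the standard PEFT LoRA fields. The sketch below shows the typical keys with placeholder values; consult the actual file in this repository for this adapter's real settings:

```python
import json

# Illustrative placeholder values -- read the real adapter_config.json
# from the repository for this adapter's actual settings.
example_cfg = {
    "peft_type": "LORA",
    "r": 16,                 # rank of the low-rank matrices
    "lora_alpha": 32,        # scaling: the update is multiplied by alpha / r
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "base_model_name_or_path": "lmms-lab/llava-onevision-qwen2-0.5b-ov",
}

# PEFT serializes the config as plain JSON; the round-trip is lossless
text = json.dumps(example_cfg, indent=2)
assert json.loads(text) == example_cfg
print(text)
```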

Training Details

See trainer_state.json and training_args.bin for full training configuration.
