# LoRA Model for LLaVA OneVision
This repository contains a LoRA (Low-Rank Adaptation) adapter fine-tuned on top of `lmms-lab/llava-onevision-qwen2-0.5b-ov`.
## Usage

```python
from peft import PeftModel
from llava.model.builder import load_pretrained_model

# Load the base model
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "lmms-lab/llava-onevision-qwen2-0.5b-ov",
    None,  # model_base
    model_name="llava_qwen",
    device_map="auto",
)

# Load the LoRA weights on top of it
model = PeftModel.from_pretrained(model, "mianaro3/EEE-BENCH-LORA-0.5")
model.eval()
```
## Files

- `adapter_config.json`: LoRA configuration
- `adapter_model.safetensors`: LoRA weights
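The rank `r` and target modules recorded in `adapter_config.json` determine the adapter's size. As a rough guide (the rank and weight shapes below are illustrative assumptions, not values read from this repository's config), each adapted weight of shape `(d_out, d_in)` gains `r * (d_in + d_out)` LoRA parameters:

```python
def lora_param_count(r, shapes):
    # Each adapted weight (d_out, d_in) gains A: (r, d_in) and B: (d_out, r).
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Hypothetical example: rank 16 on q_proj and v_proj of one layer,
# using Qwen2-0.5B's hidden size of 896.
print(lora_param_count(16, [(896, 896), (896, 896)]))  # → 57344 params per layer
```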
## Training Details

See `trainer_state.json` and `training_args.bin` for the full training configuration.
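`trainer_state.json` is plain JSON, so the loss curve can be inspected with the standard library. The excerpt below is illustrative; the real file holds this run's actual steps and losses:

```python
import json

# Illustrative stand-in for trainer_state.json (not this repo's actual values)
state = json.loads("""
{"global_step": 100,
 "log_history": [{"step": 50, "loss": 1.2}, {"step": 100, "loss": 0.8}]}
""")

# Trainer log entries mix loss logs with eval/metric entries, so filter on "loss"
losses = [entry["loss"] for entry in state["log_history"] if "loss" in entry]
print(losses)  # → [1.2, 0.8]
```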