# Qwen2.5-7B-Instruct PubMedQA QLoRA Layer11-only (ctx=2048)

- Adapter type: LoRA / QLoRA
- Train data: PubMedQA (labeled + artificial)
- Context length: 2048
- Date: 2025-10-19
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Qwen/Qwen2.5-7B-Instruct"
adapter = "<your-repo-id>"  # e.g. katsukiono/Qwen2.5-7B-Instruct-pubmed-qlora-layer11-only-2048

tok = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, adapter)
model.eval()
```
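Once the adapter is loaded, inference follows the usual Qwen chat-template flow. The sketch below shows a PubMedQA-style yes/no/maybe query; the exact prompt wording and the question/context strings are assumptions (the repo does not document the training prompt format), so adjust them to match your fine-tuning setup. The heavy model-loading code is kept behind a `__main__` guard.

```python
# Sketch of PubMedQA-style inference with the adapter loaded as above.
# The system prompt and message layout are assumptions, not the documented
# training format -- match them to whatever was used during fine-tuning.

def build_messages(question: str, context: str) -> list[dict]:
    """Build a chat-format message list for a yes/no/maybe biomedical query."""
    return [
        {
            "role": "system",
            "content": "Answer the biomedical question with yes, no, or maybe.",
        },
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ]


if __name__ == "__main__":
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = "Qwen/Qwen2.5-7B-Instruct"
    adapter = "<your-repo-id>"  # hypothetical placeholder, as above

    tok = AutoTokenizer.from_pretrained(base, use_fast=True)
    model = AutoModelForCausalLM.from_pretrained(
        base, device_map="auto", torch_dtype="auto", trust_remote_code=True
    )
    model = PeftModel.from_pretrained(model, adapter)
    model.eval()

    # Example question/context strings are illustrative only.
    messages = build_messages(
        "Does aspirin reduce cardiovascular risk?",
        "Randomized trials report reduced event rates in treated groups.",
    )
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        out = model.generate(inputs, max_new_tokens=32, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) with a small `max_new_tokens` budget suits short classification-style answers; raise the budget if you also want the model's reasoning.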
This repo contains only a PEFT (LoRA) adapter, not full model weights; the base model is downloaded separately from `Qwen/Qwen2.5-7B-Instruct`.