YAML Metadata
Warning:
empty or missing yaml metadata in repo card
(https://huggingface.co/docs/hub/model-cards#model-card-metadata)
Jailbreak-Detector-2-XL β Qwen2.5-0.5B-Instruct (merge + ONNX)
This repo includes a small script to:
- Merge the LoRA
madhurjindal/Jailbreak-Detector-2-XLinto the base modelQwen/Qwen2.5-0.5B-Instructand save the FULL merged weights. - Export the merged model to ONNX (default task:
text-generation-with-past).
Outputs are written under:
models/qwen2.5-0.5b-jb2xl-merged/(merged full HF model)onnx/qwen2.5-0.5b-jb2xl/(ONNX export)
Note: You need internet access the first time to download from Hugging Face.
Quick Start (Docker)
docker build --no-cache -t jb2xl-merge .
docker run --rm -it \
-e HF_HUB_ENABLE_HF_TRANSFER=1 \
-v "$(pwd)":/workspace \
jb2xl-merge \
python scripts/merge_and_export.py \
--base Qwen/Qwen2.5-0.5B-Instruct \
--adapter madhurjindal/Jailbreak-Detector-2-XL \
--out models/qwen2.5-0.5b-jb2xl-merged \
--onnx-out onnx/qwen2.5-0.5b-jb2xl \
--task text-generation-with-past
If you have a private HF token, pass it at runtime (Transformers picks it up automatically):
docker run --rm -it \
-e HUGGINGFACE_HUB_TOKEN=hf_xxx \
-v "$(pwd)":/workspace \
jb2xl-merge \
python scripts/merge_and_export.py
If you see ModuleNotFoundError: No module named 'optimum.exporters.onnx', rebuild the image with --no-cache as shown above, or install the extras inside the container:
docker run --rm -it -v "$(pwd)":/workspace jb2xl-merge \
pip install "optimum[exporters,onnxruntime]>=1.23.3"
Local Python (optional)
If you prefer not to use Docker, install Python 3.10+ and run:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python scripts/merge_and_export.py
Arguments
--base Base model (default: Qwen/Qwen2.5-0.5B-Instruct)
--adapter LoRA adapter (default: madhurjindal/Jailbreak-Detector-2-XL)
--out Output dir for merged model (default: models/qwen2.5-0.5b-jb2xl-merged)
--onnx-out Output dir for ONNX (default: onnx/qwen2.5-0.5b-jb2xl)
--task ONNX task: text-generation | text-generation-with-past (default)
--opset ONNX opset (optional)
--dtype Torch dtype for merging load (default: float32)
--no-onnx Skip ONNX export
Notes
- The merged model is saved with
safetensorsformat and includes the tokenizer. - ONNX export uses Optimum and will generate the appropriate graph(s) for the task you choose.
- For very large ONNX files (>2GB), external data format is handled automatically by Optimum.
- Downloads last month
- 10
Inference Providers
NEW
This model isn't deployed by any Inference Provider.
π
Ask for provider support