# SRE Agent - Qwen2.5-Coder-7B
A LoRA fine-tune of Qwen2.5-Coder-7B-Instruct for Site Reliability Engineering (SRE) tasks, scoring 80.4% overall across 18 SRE evaluation scenarios.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "abdelrahman179/sre-agent-qwen2.5-7b")
tokenizer = AutoTokenizer.from_pretrained("abdelrahman179/sre-agent-qwen2.5-7b")
```
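Once loaded, the model expects ChatML-style prompts, as Qwen2.5 instruct models do. In practice `tokenizer.apply_chat_template` assembles this for you; the sketch below builds the format by hand just to make it visible (the system/user strings are illustrative, not from the model card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt (Qwen2.5 chat format).

    Normally produced by tokenizer.apply_chat_template; shown
    here only to illustrate what the template generates.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are an SRE assistant.",
    "A pod is stuck in CrashLoopBackOff. What should I check first?",
)
```

The resulting string can be tokenized and passed to `model.generate`; the trailing `<|im_start|>assistant\n` cues the model to produce the assistant turn.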
## Performance
- Security & RBAC: 95%
- Networking: 90.7%
- Node Issues: 87.3%
- Overall: 80.4%
**Author:** AbdElrahman I. Zaki