Update README.md
README.md CHANGED
@@ -1,51 +1,217 @@
Previous version (removed):

---
license: mit
base_model: AlicanKiraz0/Qwen3-14B-BaronLLM-v2
tags:
- llama-cpp
- gguf-my-repo
---

This model was converted to GGUF format from [`AlicanKiraz0/Qwen3-14B-BaronLLM-v2`](https://huggingface.co/AlicanKiraz0/Qwen3-14B-BaronLLM-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/AlicanKiraz0/Qwen3-14B-BaronLLM-v2) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag:
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

New version (added):

---
license: mit
base_model: AlicanKiraz0/Qwen3-14B-BaronLLM-v2
tags:
- llama-cpp
- gguf-my-repo
---

# BaronLLM v2.0 - State-of-the-Art Offensive Security AI Model

<img src="https://huggingface.co/AlicanKiraz0/BaronLLM-llama3.1-v1/resolve/main/BaronLLM.png" width="700" />

**Developed by Alican Kiraz | Trendyol Group Security Team**

**Links:**
- Medium: https://alican-kiraz1.medium.com/
- LinkedIn: https://tr.linkedin.com/in/alican-kiraz
- X: https://x.com/AlicanKiraz0
- YouTube: https://youtube.com/@alicankiraz0

> **BaronLLM v2.0** is a state-of-the-art large language model fine-tuned specifically for *offensive cybersecurity research & adversarial simulation*, achieving breakthrough performance on industry benchmarks while maintaining safety constraints.

---

## 🏆 Benchmark Achievements

### CS-Eval Global Rankings
- **13th place** globally among all cybersecurity AI models
- **4th place** among publicly released models in its parameter class
- Comprehensive average score: **80.93%**

### SecBench Performance Metrics

| Category | BaronLLM v2.0 | vs. Industry Leaders |
|----------|---------------|----------------------|
| **Standards & Regulations** | **87.2%** | Only 4.3 points behind Deepseek-v3 (671B), a model 48× larger |
| **Application Security** | **85.5%** | Just 4.8 points behind GPT-4o (175B) with 12.5× fewer parameters |
| **Endpoint & Host** | **88.1%** | Only 1.4 points behind o1-preview (200B), a model ~14× larger |
| **MCQ Overall** | **86.9%** | Within 2-6% of premium models |

### Performance Improvements (v1 → v2)
- Base model performance boosted by **~1.5x** on CyberSec-Eval benchmarks
- Enhanced with Causal Reasoning and Chain-of-Thought (CoT) capabilities
- Significantly outperforms the Qwen3-14B base model across all security metrics

---

## ✨ Key Features

| Capability | Details |
|------------|---------|
| **Adversary Simulation** | Generates full ATT&CK chains, C2 playbooks, and social-engineering scenarios |
| **Exploit Reasoning** | Step-by-step vulnerability analysis with code-level explanations and PoC generation |
| **Payload Optimization** | Advanced obfuscation techniques and multi-stage payload logic |
| **Threat Intelligence** | Log analysis, artifact triage, and attack pattern recognition |
| **Cloud-Native Security** | Kubernetes, serverless, and multi-cloud environment testing |
| **Emerging Threats** | AI/ML security, quantum computing risks, and zero-day research |

---

## 🏗️ Model Architecture

| Specification | Details |
|--------------|---------|
| **Base Model** | Qwen3-14B |
| **Parameters** | 14 Billion |
| **Context Length** | 8,192 tokens |
| **Training Data** | 53,202 curated examples |
| **Domains Covered** | 200+ specialized cybersecurity areas |
| **Languages** | English |
| **Fine-tuning Method** | Instruction tuning with CoT |
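
For authorized users, the specs above can be cross-checked against the published config (a minimal sketch, assuming the gated `AlicanKiraz/BaronLLM-v2.0` repo id from the Quick Start below and standard `transformers` config field names):

```python
# A sketch, assuming authenticated access to the gated repo referenced in the
# Quick Start below; the field names are the standard transformers config ones.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("AlicanKiraz/BaronLLM-v2.0")
print("hidden layers:", config.num_hidden_layers)
print("max positions:", config.max_position_embeddings)  # compare with the advertised context length
```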

---

## 📊 Training Dataset

**53,202** meticulously curated instruction-tuning examples covering **200+ specialized cybersecurity domains**:

### Topic Distribution
- Cloud Security & DevSecOps: 18.5%
- Threat Intelligence & Hunting: 16.2%
- Incident Response & Forensics: 14.8%
- AI/ML Security: 12.3%
- Network & Protocol Security: 11.7%
- Identity & Access Management: 9.4%
- Emerging Technologies: 8.6%
- Platform-Specific Security: 5.3%
- Compliance & Governance: 3.2%

### Data Sources (Curated & Redacted)
- Public vulnerability databases (NVD/CVE, VulnDB)
- Security research papers (Project Zero, PortSwigger, NCC Group)
- Industry threat reports (with permissions)
- Synthetic ATT&CK chains (auto-generated + human-vetted)
- Conference proceedings (BlackHat, DEF CON, RSA)

> **Note:** No copyrighted exploit code or proprietary malware datasets were used.
> Dataset filtering removed raw shellcode/binary payloads.

---

## 🚀 Usage & Access

### Availability
Due to the sensitive nature of offensive security capabilities, BaronLLM v2.0 is available through:
- **Invite-only access** for verified cybersecurity professionals
- **Academic partnerships** with research institutions
- **Enterprise licensing** for authorized security teams

To request access, please contact us with your professional credentials and use case.

### Quick Start (For Authorized Users)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlicanKiraz/BaronLLM-v2.0"  # Requires authentication
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

def generate(prompt, **kwargs):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, **kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example usage
print(generate("Analyze the exploitability of CVE-2024-45721 in a Kubernetes cluster"))
```
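
Because the base model is Qwen3-14B, chat-style prompting through the tokenizer's chat template may work better than raw strings. A minimal sketch, assuming the fine-tune keeps Qwen3's built-in chat template and reusing the `generate` helper above:

```python
# A sketch, not an official usage pattern: chat-style prompting via the
# tokenizer's chat template, assuming the fine-tune keeps Qwen3's template.
messages = [
    {"role": "system", "content": "You are an authorized offensive-security assistant."},
    {"role": "user", "content": "Outline an ATT&CK chain for a misconfigured S3 bucket."},
]
chat_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
# Sampling kwargs pass straight through to model.generate();
# do_sample=True is needed for temperature/top_p to take effect.
print(generate(chat_prompt, do_sample=True, temperature=0.5, top_p=0.95))
```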

---

## 📝 Prompting Best Practices

| Objective | Template | Parameters |
|-----------|----------|------------|
| **Exploit Analysis** | `ROLE: Senior Pentester\nOBJECTIVE: Analyze CVE-XXXX...` | `temperature=0.3, top_p=0.9` |
| **Red Team Planning** | `Generate ATT&CK chain for [target environment]...` | `temperature=0.5, top_p=0.95` |
| **Threat Hunting** | `Identify C2 patterns in [log type]...` | `temperature=0.2, top_p=0.85` |
| **Incident Response** | `Create response playbook for [threat scenario]...` | `temperature=0.4, top_p=0.9` |
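
As a concrete example, the Exploit Analysis row maps onto the `generate` helper from the Quick Start as follows (a minimal sketch; the CVE identifier is a placeholder, not a real advisory):

```python
# Sketch: the "Exploit Analysis" template with its suggested parameters.
# CVE-XXXX-XXXXX is a placeholder identifier, not a real CVE.
exploit_prompt = (
    "ROLE: Senior Pentester\n"
    "OBJECTIVE: Analyze CVE-XXXX-XXXXX for exploitability in an authorized lab environment"
)
print(generate(exploit_prompt, do_sample=True, temperature=0.3, top_p=0.9))
```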

---

## 🛡️ Safety & Alignment

### Ethical Framework
- **Policy Gradient RLHF** with security domain experts
- **OpenAI/Anthropic-style policies** preventing malicious misuse
- **Continuous red-teaming** via SecEval v0.3
- **Dual-use prevention** mechanisms

### Responsible Disclosure
- Model capabilities are documented transparently
- Access restricted to verified professionals
- Usage monitoring for compliance
- Regular security audits

---

## 📄 Academic Publication

The technical paper detailing BaronLLM v2.0's architecture, training methodology, and benchmark results will be available on arXiv within one week.

**Citation (Preprint - Coming Soon):**
```bibtex
@article{kiraz2025baronllm,
  title={BaronLLM v2.0: State-of-the-Art Offensive Security Language Model},
  author={Kiraz, Alican},
  journal={arXiv preprint arXiv:2025.XXXXX},
  year={2025}
}
```

---

## 🤝 Contributing & Support

BaronLLM was originally developed to support the Trendyol Group Security Team and has evolved into a state-of-the-art offensive security AI model. We welcome collaboration from the security community:

- **Bug Reports**: via GitHub Issues
- **Feature Requests**: through community discussions
- **Research Collaboration**: contact us for academic partnerships

---

## ⚖️ License & Disclaimer

**License:** Apache 2.0 (model weights require separate authorization)

**Important:** This model is designed for authorized security testing and research only. Users must comply with all applicable laws and obtain proper authorization before conducting any security assessments. The developers assume no liability for misuse.

---

## 🙏 Acknowledgments

Special thanks to:
- Trendyol Group Security Team
- The open-source security community
- Academic research partners
- All contributors and testers

---

*"Those who shed light on others do not remain in darkness..."*

**This project does not pursue any profit.**