# GPT-2 + IBCE Low-Rank (trust_remote_code)
**What it is.** A runtime wrapper that keeps GPT-2's fused attention kernels and swaps only the QKV/PROJ linear layers for low-rank factors.
No weights are stored in this repository: on load it fetches openai-community/gpt2 and patches it at runtime. Because this uses `trust_remote_code`, the HF Inference API/Serverless is not supported; run it in your own environment or notebook.
**Headline results** (A100, FP16, greedy): ~1.8–2.0× tokens/sec at a ~0.8% perplexity delta on WikiText-2 (raw).
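For intuition, a rank-r factorization replaces a dense d_out × d_in weight with two thin factors, cutting the projection cost from d_out · d_in to r · (d_out + d_in) multiply-adds per token. The sketch below is a generic illustration using plain `nn.Linear` and truncated SVD, with the rank 192 taken from the repo name; the actual IBCE factorization and the runtime patching of GPT-2's fused modules live in this repo's remote code and may differ.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Rank-r replacement for nn.Linear: y = up(down(x)), with up @ down ~= W."""
    def __init__(self, linear: nn.Linear, rank: int = 192):
        super().__init__()
        # Truncated SVD gives the best rank-r approximation of the dense weight.
        U, S, Vh = torch.linalg.svd(linear.weight.data.float(), full_matrices=False)
        r = min(rank, S.numel())
        self.down = nn.Linear(linear.in_features, r, bias=False)                  # V factor
        self.up = nn.Linear(r, linear.out_features, bias=linear.bias is not None)  # U factor
        self.down.weight.data.copy_(S[:r].sqrt()[:, None] * Vh[:r])
        self.up.weight.data.copy_(U[:, :r] * S[:r].sqrt())
        if linear.bias is not None:
            self.up.bias.data.copy_(linear.bias.data)

    def forward(self, x):
        return self.up(self.down(x))
```

At GPT-2 small's hidden size (768), a rank-192 factorization of the 768 → 2304 QKV projection stores 192 · (768 + 2304) ≈ 0.59M parameters in place of ~1.77M dense weights.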
## Quick start (pin a safe revision)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch as th

repo = "jackal79/gpt2-ibce-lowrank-192"
rev = "a71015d1333757bedaaa2437333cd44415076218"  # pinned revision

tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True, revision=rev)
mdl = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, revision=rev, dtype=th.float16
).cuda().eval()

x = tok("Hello from IBCE low-rank!", return_tensors="pt").to("cuda")
y = mdl.generate(**x, max_new_tokens=64, do_sample=False)
print(tok.decode(y[0], skip_special_tokens=True))
```
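To sanity-check the quality figures locally, a standard strided-window perplexity loop over the raw WikiText-2 test split (reusing `tok` and `mdl` from above) looks like the sketch below. The exact protocol behind the self-reported numbers isn't documented here, so the window/stride choices (1024/512) are assumptions and the result may differ slightly.

```python
import math
import torch as th
from datasets import load_dataset

# Concatenate the raw WikiText-2 test split and score it with a strided window.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids.to("cuda")

max_len, stride = 1024, 512
nll_sum, n_tokens, prev_end = 0.0, 0, 0
for begin in range(0, ids.size(1), stride):
    end = min(begin + max_len, ids.size(1))
    trg_len = end - prev_end                # tokens newly scored in this window
    input_ids = ids[:, begin:end]
    labels = input_ids.clone()
    labels[:, :-trg_len] = -100             # mask the overlapping context
    with th.no_grad():
        nll = mdl(input_ids, labels=labels).loss
    nll_sum += nll.item() * trg_len
    n_tokens += trg_len
    prev_end = end
    if end == ids.size(1):
        break

print(f"WikiText-2 (raw) perplexity: {math.exp(nll_sum / n_tokens):.3f}")
```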
## Licensing
This repository is released under an **Evaluation License** for non-production, non-hosted use.
Commercial production or hosted use requires a separate license.
Contact **[email protected]**.
The full license text is in [`LICENSE.txt`](https://huggingface.co/jackal79/gpt2-ibce-lowrank-192/blob/main/LICENSE.txt).
Base model: [openai-community/gpt2](https://huggingface.co/openai-community/gpt2)

## Evaluation results
All values are self-reported.

| Metric | Dataset | Value |
|---|---|---|
| Perplexity (GPT-2 baseline) | WikiText-2 (raw) | 51.641 |
| Perplexity (IBCE) | WikiText-2 (raw) | 52.068 |
| Tokens per second (baseline, A100 FP16, 512 tok) | WikiText-2 (raw) | 102.000 |
| Tokens per second (IBCE, A100 FP16, 512 tok) | WikiText-2 (raw) | 190.000 |
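For reference, the headline figures above follow directly from this table: 190 / 102 ≈ 1.86× throughput, and (52.068 − 51.641) / 51.641 ≈ 0.83% higher perplexity.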