Metrics
| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|---|---|---|---|---|---|---|---|
| 17.18 | 45.16 ± 1.02 | 21.84 ± 1.21 | 63.22 ± 1.13 | 51.78 ± 1.40 | 34.14 ± 0.47 | - | 43.23 |

QA scores are accuracy (%) ± standard error. QA Avg is the unweighted mean of the five reported task accuracies; MMLU was not reported.
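The accuracy ± standard-error format matches the output of the EleutherAI lm-evaluation-harness. Below is a minimal sketch of how such numbers could be reproduced, assuming that harness (v0.4-style Python API) with zero-shot settings; the task list, dtype, and batch size here are assumptions, not documented settings of this card.

```python
# Sketch: reproducing the QA accuracies with the EleutherAI lm-evaluation-harness.
# Assumptions: v0.4-style Python API, zero-shot evaluation, this task list.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=acoleman/2-bit-baseline,dtype=float16",
    tasks=["arc_easy", "arc_challenge", "piqa", "winogrande", "hellaswag"],
    num_fewshot=0,
    batch_size=8,
)

# results["results"] maps each task name to its metrics (accuracy, stderr, ...).
for task, metrics in results["results"].items():
    print(task, metrics)
```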
The training method is based on the BitDistiller paper (quantization-aware training with self-distillation).
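As a rough illustration of the idea BitDistiller builds on: a full-precision copy of the model serves as the teacher, the student's weights are fake-quantized to 2 bits with a straight-through estimator, and the student is trained to match the teacher's token distribution. The sketch below is generic and not the paper's exact recipe (BitDistiller uses per-group asymmetric clipping and its confidence-aware CAKLD objective); all function names are illustrative.

```python
# Illustrative only: generic 2-bit quantization-aware self-distillation pieces.
# Not BitDistiller's exact recipe; names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, n_bits: int = 2) -> torch.Tensor:
    """Asymmetric per-tensor fake quantization with a straight-through estimator."""
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (w.max() - w.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-w.min() / scale)
    w_q = (torch.round(w / scale + zero_point).clamp(qmin, qmax) - zero_point) * scale
    return w + (w_q - w).detach()  # forward uses w_q, gradients flow to w

def distill_loss(student_logits, teacher_logits, temperature: float = 1.0):
    """Plain KL(teacher || student) over token logits (BitDistiller's CAKLD
    additionally mixes forward/reverse KL weighted by teacher confidence)."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)
```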
- License: MIT
- Fine-tuned from: TinyLlama/TinyLlama_v1.1
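A minimal loading and generation example, assuming the checkpoint in this repo is stored in standard Transformers format (if the weights are packed or fake-quantized by BitDistiller's own tooling, its loading code would be needed instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acoleman/2-bit-baseline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The three primary colors are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```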