Metrics
| PPL | arc_easy | arc_challenge | piqa | winogrande | hellaswag | mmlu | QA Avg |
|---|---|---|---|---|---|---|---|
| 5.47 | 76.30 ± 0.87 | 43.34 ± 1.45 | 77.97 ± 0.97 | 69.22 ± 1.30 | 57.11 ± 0.49 | - | 64.79 |

QA columns report task accuracy (%) with standard error; PPL is perplexity (lower is better).
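The task names and ± standard errors follow the conventions of EleutherAI's lm-evaluation-harness. Below is a minimal sketch for re-running the QA benchmarks, assuming that harness was used; the task list, batch size, and few-shot settings are illustrative and not confirmed by this card.

```python
# Evaluation sketch using lm-evaluation-harness (pip install lm-eval).
# Assumption: the reported QA numbers come from these harness tasks.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BrownianNotion/Llama-2-7b-hf_full_precision_baseline",
    tasks=["arc_easy", "arc_challenge", "piqa", "winogrande", "hellaswag", "mmlu"],
    batch_size=8,  # illustrative; adjust to available GPU memory
)

# Print the per-task metric dictionaries (acc / acc_norm and their stderr).
for task, metrics in results["results"].items():
    print(task, metrics)
```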
Trained using the method from the BitDistiller paper.
- License: MIT
- Fine-tuned from: TinyLlama/TinyLlama_v1.1
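A minimal usage sketch with 🤗 Transformers; the prompt and generation settings below are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BrownianNotion/Llama-2-7b-hf_full_precision_baseline"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; places weights on available devices
)

# Example prompt; replace with your own input.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```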