GSM8K results for the W8A8 quantized model:

vllm (pretrained=/root/autodl-tmp/output,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|-------|--------:|--------|-------:|--------|---|------:|---|-------:|
| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.848 | ± | 0.0228 |
|       |   | strict-match     | 5 | exact_match | ↑ | 0.896 | ± | 0.0193 |
GSM8K results for the original VAGOsolutions/SauerkrautLM-v2-14b-DPO model:

vllm (pretrained=/root/autodl-tmp/SauerkrautLM-v2-14b-DPO,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,dtype=bfloat16), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|-------|--------:|--------|---------:|--------|---|------:|---|-------:|
| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.832 | ± | 0.0237 |
|       |   | strict-match     | 5 | exact_match | ↑ | 0.852 | ± | 0.0225 |
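The headers above record the exact lm-evaluation-harness settings used (vLLM backend, 5-shot, limit of 250 examples, auto batch size). Below is a minimal sketch, assuming lm-evaluation-harness with the vLLM extra is installed, of how such a run can be reproduced through the Python API; the local checkpoint path is a placeholder for wherever the model is stored.

```python
# Sketch: reproduce the GSM8K evaluation shown above with lm-evaluation-harness.
# Assumes `pip install lm_eval[vllm]` and two GPUs (tensor_parallel_size=2).
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/root/autodl-tmp/output,"  # placeholder path to the W8A8 checkpoint
        "add_bos_token=true,"
        "tensor_parallel_size=2,"
        "max_model_len=2048,"
        "dtype=bfloat16"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,          # first 250 GSM8K test examples, as in the tables above
    batch_size="auto",
)

# Per-filter exact_match scores (flexible-extract and strict-match)
print(results["results"]["gsm8k"])
```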
Model tree for noneUsername/SauerkrautLM-v2-14b-DPO-W8A8-Dynamic-Per-Token:

- Base model: Qwen/Qwen2.5-14B
- Finetuned (SFT): VAGOsolutions/SauerkrautLM-v2-14b-SFT
- Finetuned (DPO): VAGOsolutions/SauerkrautLM-v2-14b-DPO
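Since the evaluation above was run through vLLM, a minimal usage sketch follows. It assumes the checkpoint is in a quantization format that vLLM can load directly and reuses the same settings as the evaluation run (bfloat16, max_model_len=2048, two GPUs); the prompt is only an illustration.

```python
# Sketch: load this repository's quantized checkpoint with vLLM for inference.
from vllm import LLM, SamplingParams

llm = LLM(
    model="noneUsername/SauerkrautLM-v2-14b-DPO-W8A8-Dynamic-Per-Token",
    dtype="bfloat16",
    max_model_len=2048,
    tensor_parallel_size=2,  # two GPUs, matching the evaluation configuration
)

outputs = llm.generate(
    ["Explain in one sentence what dynamic per-token W8A8 quantization means."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```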