hamishivi committed · Commit e202198 · verified · 1 Parent(s): 5ddb30d

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED

@@ -10,7 +10,7 @@ license: llama3.1
 # Llama 3.1 RDS+ Tulu 3 Multitask 326k
 
 This is a model trained on 939k samples selected by RDS+ from the [Tulu 3 unfiltered dataset](https://huggingface.co/datasets/hamishivi/tulu-3-unfiltered).
-For more details, please see the paper [Practical Large-Scale Data Selection for Instruction Tuning](todo) and [associated codebase](https://github.com/hamishivi/automated-instruction-selection).
+For more details, please see the paper [Practical Large-Scale Data Selection for Instruction Tuning](https://arxiv.org/abs/2503.01807) and [associated codebase](https://github.com/hamishivi/automated-instruction-selection).
 
 <center>
 <img src="https://huggingface.co/hamishivi/tulu-2-multitask-rrmax-326k-sft/resolve/main/image.png" alt="Practical Large-Scale Data Selection for Instruction Tuning logo" width="200px"/>

@@ -31,7 +31,7 @@ For more details, please see the paper [Practical Large-Scale Data Selection for
 
 ## Results
 
-For more results and analysis, please see [our paper](todo).
+For more results and analysis, please see [our paper](https://arxiv.org/abs/2503.01807).
 
 | Method | MMLU | GSM8k | BBH | TydiQA | Codex | Squad | AlpacaEval | Average |
 |-----------------------|------:|------:|-----:|-------:|------:|------:|-----------:|--------:|

@@ -77,7 +77,8 @@ If you find this model or data is useful in your work, please cite it with:
 title={{Practical Large-Scale Data Selection for Instruction Tuning}},
 author={{Hamish Ivison and Muru Zhang and Faeze Brahman and Pang Wei Koh and Pradeep Dasigi}}
 year={2025},
-eprint={todo},
+url={https://arxiv.org/abs/2503.01807},
+eprint={2503.01807},
 archivePrefix={arXiv},
 primaryClass={cs.CL}
 }