venue: stringclasses (9 values)
original_openreview_id: stringlengths (8-17)
revision_openreview_id: stringlengths (8-11)
content: stringlengths (2-620k)
time: stringdate (2016-11-04 05:38:56 to 2025-05-23 04:52:50)
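Each record below pairs a revision of an OpenReview submission with the list of text edits it introduces; the edits are serialized as a Python-literal string in the content column. The following is a minimal parsing sketch, not the dataset's official loader: it assumes the dump mirrors the five-column layout above, and the content value in the sample row is abbreviated for illustration rather than copied verbatim from the data.

```python
# Minimal sketch under the assumptions stated above. The `content` value is an
# abbreviated, illustrative example of an edit-record list, not a verbatim cell.
import ast

row = {
    "venue": "ICLR.cc/2025/Conference",
    "original_openreview_id": "QWXvl2dylM",
    "revision_openreview_id": "N1nFXmWhRR",
    "content": "[{'section': 'Abstract', 'after_section': None, "
               "'context_after': '...', 'context_before': '...', "
               "'modified_lines': 'new wording', 'original_lines': 'old wording', "
               "'paragraph_idx': 2}]",
    "time": "2025-03-30 18:18:16",
}

# `content` is stored as a string; ast.literal_eval turns it back into a list of
# dicts (it handles the quoted strings, numbers, and None values in the records).
edits = ast.literal_eval(row["content"])

for edit in edits:
    # Each record is one revision hunk: the section it touches, the surrounding
    # context, and the text before and after the change.
    print(edit["section"], "paragraph", edit.get("paragraph_idx"))
    print("  original:", edit["original_lines"][:80])
    print("  modified:", edit["modified_lines"][:80])
```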
venue: ICLR.cc/2025/Conference
original_openreview_id: QWXvl2dylM
revision_openreview_id: N1nFXmWhRR
content:
[{'section': '12 (cid:0)J(π1', 'after_section': '12 (cid:0)J(π1', 'context_after': 'The set of all such maps forms a group, which we denote by Ψ. Given ϕ ∈ Ψ, we slightly abuse notation and define ϕ(s) := ϕS (s), ϕ(ai) := ϕAi(ai) and ϕ(oi) := ϕOi(oi), for s ∈ S, ai ∈ Ai and oi ∈ Oi. Given a joint AOH τt = (τ 1 ', 'paragraph_idx': 14, 'before_section': '12 (cid:0)J(π1', 'context_before': 'We consider symmetries that can be expressed as maps ϕ = (ϕS , ϕA, ϕO), consisting of bijective : Oi → Oi, i = 1, ..., n, in the sense that maps ϕS : S → S, ϕAi ', 'modified_lines': 'ϕA(a) = (ϕA1(a1), ..., ϕAn(an)), ϕO(o) = (ϕO1(o1), ..., ϕOn (on)), for all a ∈ A and o ∈ O. ', 'original_lines': 'ϕA(a) = (ϕA1 (a1), ..., (ϕAn (an)), ϕO(o) = (ϕO1(o1), ..., (ϕOn (on)), for all a ∈ A and o ∈ O. ', 'after_paragraph_idx': 14, 'before_paragraph_idx': 14}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Definition 1 (Dec-POMDP Symmetries). A map ϕ ∈ Ψ is called a Dec-POMDP symmetry if for all (st, at, st+1, ot+1) ∈ S × A × S × O it holds that ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Any subgroup Φ ⊂ Ψ partitions the space of joint policies into disjoint equivalence classes: given a joint policy π, we define its equivalence class [π] := {ϕ(π) : ϕ ∈ Φ}. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'policies using their learned symmetries. ', 'paragraph_idx': 41, 'before_section': '4 EXPERIMENTS', 'context_before': 'For ZSC, we train a population of 5 IPPO (Yu et al., 2022) policies as a SP baseline, where each policy uses an RNN coupled with a CNN to process the observations. The population of ER sym- metry agents each train k = 12 IPPO SP policies, to then use Equation 11 / Algorithm 3 to obtain ', 'modified_lines': 'l = 16 ER symmetries. Each agent trains m = 3 OPΦER ', 'original_lines': 'l = 16 ER symmetries. Each agent trains m = 2 OPΦER ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': 'Abstract', 'after_section': None, 'context_after': 'and deploys the one with the highest return. Symmetries are trained via Equation 10 by randomly selecting 64 fixed transpositions on the local action space, with the same transpositions fixed for both local policies to simplify the search in the symmetric game Hanabi. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'access to all Dec-POMDP symmetries constrained by the OP objective (Equation 4). We also train a population of ER symmetry agents that independently discover ER symmetries for the OP objective. Each ER agent uses k = 6 seeds to learn symmetries and saves the top l = 11 that best preserve ', 'modified_lines': 'expected return. Every population consists of 5 agents, where each agent trains m = 5 policies ', 'original_lines': 'expected return. Every population consists of 5 agents, where each agent trains m = 3 policies ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
time: 2025-03-30 18:18:16
venue: ICLR.cc/2025/Conference
original_openreview_id: gvYyobyo3D
revision_openreview_id: q6iNuk4AY5
content: []
time: 2024-11-25 17:50:04
venue: ICLR.cc/2025/Conference
original_openreview_id: q6iNuk4AY5
revision_openreview_id: sAndr66PV9
content:
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'calibration data synthesis strategy to construct feasible calibration data. Experi- mental results on recent strong open-source LLMs (e.g., DCLM, and LLaMA-3) show that the proposed strategy can enhance the performance of strong pruning ', 'modified_lines': 'methods (e.g., Wanda, DSnoT, OWL) by a large margin (up to 2.68%)1. ', 'original_lines': 'methods (e.g., Wanda, DSnoT, OWL) by a large margin (up to 2.68%). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'methods require iterative training, which is costly and time-consuming for LLMs with billions of parameters. As a result, post-training pruning that does not require iterative training has become the preferred approach for pruning LLMs. ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'sparse training (Lee et al., 2019; Frankle & Carbin, 2019; Yuan et al., 2021; Lasby et al., 2024) or pruning-aware training (Sanh et al., 2020; Lagunas et al., 2021; Jiang et al., 2023) methods, it can achieve performance comparable to dense models with a high sparsity ratio (≥70%). However, these ', 'modified_lines': ' ∗ Corresponding author. 1Code is available at https://github.com/Dereck0602/calibration_data 1 Published as a conference paper at ICLR 2025 (a) Peformance differences of repre- sentative pruning methods with the commonly-used C4 calibration data. (b) Performance differences of vari- ous calibration data on SparseGPT. (c) Method differ- ences v.s. data dif- ferences. Figure 1: The effects of pruning methods and calibration data on commonsense reasoning tasks. ', 'original_lines': '', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': 'Abstract', 'after_section': None, 'context_after': 'magnitudes and the L2 norm of the corresponding input activations. Dong et al. (2024) utilize the genetic algorithm to search for the optimal combination of information from magnitude, activation, and gradient as an importance metric. Overall, current advanced parameter importance metrics rely ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'in post-training pruning with over 20% sparsity. Therefore, they use a small amount of calibration data to compute the inverse Hessian matrix, estimating parameter importance through second-order gradient information. Sun et al. (2024) propose a simpler method by using the product of weight ', 'modified_lines': '', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 (a) Peformance differences of repre- sentative pruning methods with the commonly-used C4 calibration data. (b) Performance differences of vari- ous calibration data on SparseGPT. (c) Method differ- ences v.s. data dif- ferences. Figure 1: The effects of pruning methods and calibration data on commonsense reasoning tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Wanda (Sun et al., 2024) evaluates weight importance by combining their magnitudes with input activations without requiring backpropagation. Zhang et al. 
(2024c) propose the relative importance and activation metric (RIA), which integrates weight, input, and output activation. They also utilize ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ture activation. For post-training pruning, to optimize the objective, OBC (Frantar & Alistarh, 2022) and SparseGPT (Frantar & Alistarh, 2023) utilize second-order gradient information to measure pa- rameter importance and propose an efficient algorithm for computing the inverse Hessian matrix. ', 'modified_lines': '', 'original_lines': " 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 0HWKRGGLII'DWDGLII Under review as a conference paper at ICLR 2025 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 ", 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 EXPERIMENTAL DETAILS', 'after_section': '3.1 EXPERIMENTAL DETAILS', 'context_after': 'and conduct post-training pruning with different calibration data on it. Post-training Pruning Methods We choose three competitive and representative post-training pruning methods for evaluation: Wanda (Sun et al., 2024), DSnoT (Zhang et al., 2024d) and OWL (Yin et al., 2024). These methods apply to both unstructured and semi-structured pruning. Calibration Data We consider various data sources to be calibration data. Following the main- stream works, the calibration data sources are all from the unlabeled pre-trained corpus: multilingual web text filtered from Common Crawl. We sample from the English training set. English version until 2023-11-01. corpus with diverse sources, including C4, ArXiv, GitHub, Books, etc. • DCLM (Li et al., 2024) is the pre-training data of DCLM-7B model. It includes 2.6T tokens ', 'paragraph_idx': 13, 'before_section': '3.1 EXPERIMENTAL DETAILS', 'context_before': 'Dense Model To study the impact of data from different sources on post-training pruning methods, we need a comprehensive knowledge of the data used in model training. We select the powerful and ', 'modified_lines': 'fully open-source LLM (including training data), DCLM-7B2 (Li et al., 2024), as the dense model 2https://huggingface.co/apple/DCLM-7B 3 Published as a conference paper at ICLR 2025 (a) sparsity ratio (b) sparsity type Figure 2: Pruning performance range (M ax.-M in.) of different datasets (C4, Wikipedia, Slimpa- jama, DCLM) under various sparsity ratios (a) and sparsity types (b) on Wanda. • C4 (Raffel et al., 2020)3 is a widely used calibration data source, consisting of a large amount of • Wikipedia4 is a source of high-quality encyclopedic text. We use the first shard of the cleaned • Slimpajama5 is a cleaned and deduplicated version of RedPajama. It is a high-quality pre-training extracted from Common Crawl. We sample from a subset6 of the DCLM. ', 'original_lines': 'fully open-source LLM (including training data), DCLM-7B1 (Li et al., 2024), as the dense model • C4 (Raffel et al., 2020)2 is a widely used calibration data source, consisting of a large amount of 1https://huggingface.co/apple/DCLM-7B 2https://huggingface.co/datasets/allenai/c4 3 Under review as a conference paper at ICLR 2025 (a) sparsity ratio (b) sparsity type Figure 2: Pruning performance range (M ax.-M in.) 
of different datasets (C4, Wikipedia, Slimpa- jama, DCLM) under various sparsity ratios (a) and sparsity types (b) on Wanda. • Wikipedia3 is a source of high-quality encyclopedic text. We use the first shard of the cleaned • Slimpajama4 is a cleaned and deduplicated version of RedPajama. It is a high-quality pre-training extracted from Common Crawl. We sample from a subset5 of the DCLM. ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 13}, {'section': '3.1 EXPERIMENTAL DETAILS', 'after_section': None, 'context_after': '3.2 HOW MUCH DOES CALIBRATION DATA AFFECT PRUNING PERFORMANCE? ', 'paragraph_idx': 18, 'before_section': '3.1 EXPERIMENTAL DETAILS', 'context_before': '2021), PIQA (Bisk et al., 2020), Hellaswag (Zellers et al., 2019), ARC-e, ARC-c (Clark et al., 2018) and MMLU (Hendrycks et al., 2021). For MMLU, we use a 5-shot setting, while all other tasks are evaluated in a zero-shot setting. Our evaluation code is based on the lm-evaluation-harness ', 'modified_lines': 'repository7. We report the average performance of these seven tasks. ', 'original_lines': 'repository6. We report the average performance of these seven tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'have a negative effect at moderate sparsity levels. For instance, at 60% sparsity, using Wikipedia and Slimpajama as calibration data performs worse than magnitude pruning without any calibration data. For sparsity types, we observe that as the sparsity pattern becomes more structured, the choice ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'between different calibration data is minimal, less than 0.1%. As sparsity increases, the impact of calibration data on pruning gradually amplifies, rising from a 0.5% difference at 50% sparsity to 2.3% at 60% sparsity. Notably, as shown in Figure 6, inappropriate calibration data can even ', 'modified_lines': '', 'original_lines': ' 3https://huggingface.co/datasets/wikimedia/wikipedia 4https://huggingface.co/datasets/DKYoon/SlimPajama-6B 5https://huggingface.co/datasets/robbiegwaldd/dclm-micro 6https://github.com/EleutherAI/lm-evaluation-harness 4 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 (a) Wanda (b) DSnoT Figure 3: The impact of calibration data amount for different pre-training data resources (i.e., C4, Wikipedia, Slimpajama, DCLM) and pruning methods, i.e., Wanda (a) and DSnoT (b). Shaded areas represent the standard deviations of 20 random seeds. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'C4 OWL ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the better the pruning performance. (2) The higher the quality of the calibration data, the better the pruning performance. ', 'modified_lines': '', 'original_lines': "To verify the hypotheses, we perform three post-training pruning methods on DCLM-7B with var- ious calibration data in the 2:4 semi-structured pruning setting. We report our results in Table 1. 
5 &:LNLSHGLD6OLPDSMDPD'&/0&:LNLSHGLD6OLPDSMDPD'&/0 Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 ", 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'modeling and commonsense reasoning capabilities. We do not use the Wikitext2 dataset, which is common in most papers for evaluating language modeling ability, as its similarity to Wikipedia may introduce bias when assessing the impact of different calibration data on language modeling ability. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'DCLM as baselines for calibration data, employing three post-training pruning methods: Wanda, DSnoT, and OWL, to prune the dense models. In the main experiments, we report performance at the 60% sparsity ratio. We follow previous work to evaluate the compressed LLMs’ language ', 'modified_lines': '', 'original_lines': ' 6 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Table 2: Pruning performance of different calibration data on DCLM-7B in 60% sparsity ratio. The best performance method is indicated in bold. Wiki, Slim, and Syn are abbreviations for Wikipedia, SlimPajama, and our synthetic data, respectively. Underline means the improved performance of synthetic calibration data over the original calibration data for a certain task. ∆ denotes the average performance change of pruned models on commonsense reasoning tasks. ✓, ✗ and ✓✗ indicate that the calibration data belongs, does not belong, or partially belongs to DCLM-7B’s pretraining data, respectively. Data Pretrain Alpaca (↓) BoolQ Winogrande PIQA Hellaswag ARC-e ARC-c MMLU Avg. 
∆ Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn ✗ ✓✗ ✓✗ ✓ ✗ ✓✗ ✓✗ ✓ ✗ ✓✗ ✓✗ ✓ 9.99 9.40 9.67 9.57 9.76 9.58 9.54 9.59 10.16 9.40 9.81 9.56 9.87 9.62 9.70 9.52 9.96 9.20 9.52 9.31 9.59 9.32 9.38 9.28 72.05 78.73 78.47 78.81 78.56 78.51 79.11 79.23 69.97 77.58 76.11 75.61 75.58 76.08 77.39 76.56 75.27 78.45 78.14 78.55 78.09 78.56 78.45 78.80 Wanda 74.33 75.78 75.12 75.95 74.27 75.63 75.13 75.64 DSnoT 73.95 75.38 74.76 75.56 73.80 75.09 74.63 75.55 OWL 74.25 76.03 75.55 76.38 74.56 75.83 75.10 75.90 64.79 66.16 66.32 66.35 65.07 65.90 66.25 66.17 63.23 64.76 65.08 65.13 63.88 64.57 64.89 64.70 63.07 65.18 65.22 65.45 64.00 64.47 65.07 64.77 68.40 70.06 70.27 70.52 70.16 70.02 70.51 70.69 68.08 69.20 69.44 69.30 69.21 69.27 69.36 68.35 67.11 68.92 68.90 68.67 68.69 68.71 69.47 67.77 73.14 74.34 72.84 74.23 72.37 74.12 73.37 74.04 72.09 73.27 72.10 73.06 71.37 73.16 72.06 73.43 73.01 73.72 72.46 74.05 72.35 73.81 72.76 73.84 39.91 42.83 40.84 42.01 39.94 42.13 41.66 42.01 38.69 41.66 39.08 41.11 38.63 40.97 39.83 41.43 38.35 40.29 38.24 40.03 37.95 40.44 38.81 40.56 42.20 45.04 43.31 45.64 43.40 45.26 44.58 45.42 41.63 44.53 41.62 45.24 42.25 44.57 43.73 44.81 38.75 42.73 39.04 42.94 39.84 43.61 40.73 43.67 62.12 64.71 63.88 64.78 63.40 64.51 64.37 64.74 61.09 63.77 62.60 63.57 62.10 63.39 63.13 63.55 61.40 63.61 62.51 63.72 62.21 63.64 62.91 63.61 +2.59 +0.90 +1.11 +0.37 +2.68 +0.97 +1.29 +0.42 +2.21 +1.21 +1.43 +0.70 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'On LLaMA family models, the self-generated synthetic data also performs better than the origi- nal data, with improvements ranging from approximately 0.9% to 1.1%, and surpasses the C4 data by about 0.3% to 0.5%. Surprisingly, the performance of the self-generated calibration data even ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2.6% compared to the original Wikipedia data, and even surpasses the commonly used C4 calibra- tion data, achieving an average increase of 0.8% to 1.2%. For C4 and Slimpajama, which partially overlap with the pretraining data, the self-generation strategy also yields a 0.9-1.5% improvement. ', 'modified_lines': '', 'original_lines': ' 7 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.1', 'after_section': '6.1', 'context_after': 'We further validate the effectiveness of self-generated synthetic calibration data across more pruning settings. Table 3 illustrates the commonsense reasoning perfor- Slim DCLM Syn 69.07 53.97 64.82 61.03 66.17 63.61 6.2 HOW DOES PREFIX LENGTH AFFECT THE PERFORMANCE OF SYNTHETIC DATA? ', 'paragraph_idx': 35, 'before_section': '6.1', 'context_before': 'IS THE SYNTHETIC CALIBRATION DATA SUITABLE FOR OTHER PRUNING SETTINGS? ', 'modified_lines': 'mance of DCLM-7B during Wanda pruning using dif- ferent calibration data at unstructured 50% and 65% sparsity ratios, as well as semi-structured 4:8 and 2:4 settings. In all pruning settings, our synthetic calibra- tion data either matches or exceeds the performance of the optimal calibration data from the training set Table 3: Pruning performance of different calibration data. 
Setting C4 Wiki 50% 65% 4:8 2:4 69.43 69.26 57.22 56.10 66.27 66.17 62.52 62.31 69.62 58.14 66.28 62.88 69.64 58.11 67.02 63.61 7 Published as a conference paper at ICLR 2025 Table 2: Pruning performance of different calibration data on DCLM-7B in 60% sparsity ratio. The best performance method is indicated in bold. Wiki, Slim, and Syn are abbreviations for Wikipedia, SlimPajama, and our synthetic data, respectively. Underline means the improved performance of synthetic calibration data over the original calibration data for a certain task. ∆ denotes the average performance change of pruned models on commonsense reasoning tasks. ✓, ✗ and ✓✗ indicate that the calibration data belongs, does not belong, or partially belongs to DCLM-7B’s pretraining data, respectively. Data Pretrain Alpaca (↓) BoolQ Winogrande PIQA Hellaswag ARC-e ARC-c MMLU Avg. ∆ Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn Wiki w/ Syn C4 w/ Syn Slim w/ Syn DCLM w/ Syn ✗ ✓✗ ✓✗ ✓ ✗ ✓✗ ✓✗ ✓ ✗ ✓✗ ✓✗ ✓ 9.99 9.40 9.67 9.57 9.76 9.58 9.54 9.59 10.16 9.40 9.81 9.56 9.87 9.62 9.70 9.52 9.96 9.20 9.52 9.31 9.59 9.32 9.38 9.28 72.05 78.73 78.47 78.81 78.56 78.51 79.11 79.23 69.97 77.58 76.11 75.61 75.58 76.08 77.39 76.56 75.27 78.45 78.14 78.55 78.09 78.56 78.45 78.80 Wanda 74.33 75.78 75.12 75.95 74.27 75.63 75.13 75.64 DSnoT 73.95 75.38 74.76 75.56 73.80 75.09 74.63 75.55 OWL 74.25 76.03 75.55 76.38 74.56 75.83 75.10 75.90 64.79 66.16 66.32 66.35 65.07 65.90 66.25 63.23 64.76 65.08 65.13 63.88 64.57 64.89 64.70 63.07 65.18 65.22 65.45 64.00 64.47 65.07 64.77 68.40 70.06 70.27 70.52 70.16 70.02 70.51 70.69 68.08 69.20 69.44 69.30 69.21 69.27 69.36 68.35 67.11 68.92 68.90 68.67 68.69 68.71 69.47 67.77 73.14 74.34 72.84 74.23 72.37 74.12 73.37 74.04 72.09 73.27 72.10 73.06 71.37 73.16 72.06 73.43 73.01 73.72 72.46 74.05 72.35 73.81 72.76 73.84 39.91 42.83 40.84 42.01 39.94 42.13 41.66 42.01 38.69 41.66 39.08 41.11 38.63 40.97 39.83 41.43 38.35 40.29 38.24 40.03 37.95 40.44 38.81 40.56 42.20 45.04 43.31 45.64 43.40 45.26 44.58 45.42 41.63 44.53 41.62 45.24 42.25 44.57 43.73 44.81 38.75 42.73 39.04 42.94 39.84 43.61 40.73 43.67 62.12 64.71 63.88 64.78 63.40 64.51 64.37 64.74 61.09 63.77 62.60 63.57 62.10 63.39 63.13 63.55 61.40 62.51 63.72 62.21 63.64 62.91 63.61 +2.59 +0.90 +1.11 +0.37 +2.68 +0.97 +1.29 +0.42 +2.21 +1.21 +1.43 +0.70 DCLM. Notably, the synthetic data improve perfor- mance by approximately 0.8% in the two semi-structured pruning settings. Since semi-structured pruning can achieve practical inference acceleration and advanced GPUs already support 2:4 sparse tensor cores. Thus, we think the self-generated synthetic calibration data will effectively enhance the performance of pruned models in real-world deployment. ', 'original_lines': 'Table 3: Pruning performance of differ- ent calibration data. mance of DCLM-7B during Wanda pruning using differ- ent calibration data at unstructured 50% and 65% spar- sity ratios, as well as semi-structured 4:8 and 2:4 settings. In all pruning settings, our synthetic calibration data ei- ther matches or exceeds the performance of the optimal calibration data from the training set DCLM. Notably, the synthetic data improve performance by approximately 0.8% in the two semi-structured pruning settings. Since semi-structured pruning can achieve prac- tical inference acceleration and advanced GPUs already support 2:4 sparse tensor cores. 
Thus, we think the self-generated synthetic calibration data will effectively enhance the performance of pruned models in real-world deployment. C4 Wiki Setting 69.26 56.10 69.43 66.28 62.88 66.27 62.52 69.62 62.31 58.11 57.22 58.14 67.02 69.64 65% 50% 2:4 4:8 ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Song Guo, Jiahang Xu, Li Lyna Zhang, and Mao Yang. Compresso: Structured pruning with collab- orative prompting learns compact large language models, 2023. URL https://arxiv.org/ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. OPTQ: Accurate quantization for generative pre-trained transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=tcbBPnfwxS. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: SINGLE-SHOT NETWORK PRUNING BASED ON CONNECTION SENSITIVITY. In International Conference on Learn- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'file/6c9882bbac1c7093bd25041881277658-Paper.pdf. Optimal brain damage. ', 'modified_lines': '', 'original_lines': ' 11 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Alpaca: A strong, ', 'modified_lines': '', 'original_lines': '12 Under review as a conference paper at ICLR 2025 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': "&:LNLSHGLD6OLPSDMDPD'&/0&:LNLSHGLD6OLPSDMDPD'&/0 ", 'paragraph_idx': 2, 'before_section': None, 'context_before': 'B MORE RESULTS OF SYNTHETIC CALIBRATION DATA 15 ', 'modified_lines': '', 'original_lines': ' 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
time: 2025-02-26 15:22:42
venue: ICLR.cc/2025/Conference
original_openreview_id: sAndr66PV9
revision_openreview_id: PMJEp53aQ5
content:
[{'section': 'Abstract', 'after_section': None, 'context_after': '3 THE IMPACT OF CALIBRATION DATA FOR PRUNING ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'task-specific calibration data helps improve performance on specific downstream tasks. Williams & Aletras (2024) observe in extensive experiments that the selection of calibration data in post-training pruning and quantization methods significantly impacts downstream tasks’ performance, especially ', 'modified_lines': 'post-training pruning, which is highly sensitive to calibration data. Shin et al. (2024) notice that the reconstruction error objective (Eq. 1) leads to overfitting on calibration data, and that self-generated calibration data can effectively mitigate the overfitting. Nevertheless, current research on calibration data remains under-explored, with few studies providing guidelines for selecting calibration data. Different from previous works, our paper (1) explores the impact of calibration data under varying sparsity ratios and types, (2) investigates the effect of data amount on various calibration data, not limited to the widely used C4 calibration data, (3) further addresses which calibration data is suitable for LLM pruning and provides a practical and effective method. ', 'original_lines': 'post-training pruning, which is highly sensitive to calibration data. Nevertheless, current research on calibration data remains under-explored, with few studies providing guidelines for selecting calibra- tion data. Different from previous works, our paper (1) explores the impact of calibration data under varying sparsity ratios and types, (2) investigates the effect of data amount on various calibration data, not limited to the widely used C4 calibration data, (3) further addresses which calibration data is suitable for LLM pruning and provides a practical and effective method. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 EXPERIMENTAL DETAILS', 'after_section': '3.1 EXPERIMENTAL DETAILS', 'context_after': 'Post-training Pruning Methods We choose three competitive and representative post-training pruning methods for evaluation: Wanda (Sun et al., 2024), DSnoT (Zhang et al., 2024d) and OWL (Yin et al., 2024). These methods apply to both unstructured and semi-structured pruning. Calibration Data We consider various data sources to be calibration data. Following the main- stream works, the calibration data sources are all from the unlabeled pre-trained corpus: ', 'paragraph_idx': 14, 'before_section': '3.1 EXPERIMENTAL DETAILS', 'context_before': 'fully open-source LLM (including training data), DCLM-7B2 (Li et al., 2024), as the dense model and conduct post-training pruning with different calibration data on it. ', 'modified_lines': '2https://huggingface.co/apple/DCLM-7B 3 Published as a conference paper at ICLR 2025 (a) sparsity ratio (b) sparsity type Figure 2: Pruning performance range (M ax.-M in.) of different datasets (C4, Wikipedia, Slimpa- jama, DCLM) under various sparsity ratios (a) and sparsity types (b) on Wanda. ', 'original_lines': ' 2https://huggingface.co/apple/DCLM-7B 3 Published as a conference paper at ICLR 2025 (a) sparsity ratio (b) sparsity type Figure 2: Pruning performance range (M ax.-M in.) of different datasets (C4, Wikipedia, Slimpa- jama, DCLM) under various sparsity ratios (a) and sparsity types (b) on Wanda. 
', 'after_paragraph_idx': 14, 'before_paragraph_idx': 13}, {'section': 'Abstract', 'after_section': None, 'context_after': '3https://huggingface.co/datasets/allenai/c4 4https://huggingface.co/datasets/wikimedia/wikipedia ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'models inevitably consumes time and computational resources. Therefore, we wonder how signif- icant the impact of calibration data is on pruning performance and whether it’s worth our effort to seek optimal calibration data in research and practice. We consider different sparsity ratios and ', 'modified_lines': '', 'original_lines': 'sparsity types. Our experiments cover sparsity ratios ranging from 30% to 60%, and at 50% sparsity ratio, we further compare unstructured, 4:8 semi-structured, and 2:4 semi-structured sparsity types. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'We use Wanda as an example to illustrate the model’s performance range, defined as the difference between the maximum and minimum values, after pruning with four calibration data sets, as shown ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'Figure 3: The impact of calibration data amount for different pre-training data resources (i.e., C4, Wikipedia, Slimpajama, DCLM) and pruning methods, i.e., Wanda (a) and DSnoT (b). Shaded areas represent the standard deviations of 20 random seeds. ', 'modified_lines': ' sparsity types. Our experiments cover sparsity ratios ranging from 30% to 60%, and at 50% sparsity ratio, we further compare unstructured, 4:8 semi-structured, and 2:4 semi-structured sparsity types. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.3 HOW DOES PERPLEXITY-BASED DATA FILTERING AFFECT PRUNING PERFORMANCE?', 'after_section': '6.3 HOW DOES PERPLEXITY-BASED DATA FILTERING AFFECT PRUNING PERFORMANCE?', 'context_after': '10% filter w/o filter 64.51 64.76 64.49 Wiki Data 9.40 9.47 9.42 9.99 ', 'paragraph_idx': 45, 'before_section': None, 'context_before': 'Alpaca (↓) Commonsense ', 'modified_lines': '40% filter 30% filter 20% filter 64.49 64.71 62.12 9.40 ', 'original_lines': '20% filter 30% filter 40% filter 62.12 64.71 64.49 9.40 ', 'after_paragraph_idx': 46, 'before_paragraph_idx': None}]
time: 2025-03-12 03:09:18
venue: ICLR.cc/2025/Conference
original_openreview_id: zYKmDQD0V9
revision_openreview_id: 0ylzpfFtry
content:
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'Keywords: Lean 4, Autoformalizing, LLM, Retrieval Augmented Generation, Dataset ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'translation framework for real-world applications. As a direct application of Herald translator, we have successfully translated a template section in the Stack project, marking a notable progress in the automatic formalization of graduate-level ', 'modified_lines': 'mathematical literature. Our model, along with the datasets, are open-sourced to the public.1 ', 'original_lines': 'mathematical literature. Our model, along with the datasets, will be open-sourced to the public soon. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'to the strict requirements of formal languages, which can be burdensome for those accustomed to writing high-level, natural language proofs. ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'However, writing proofs in these formal languages requires significant effort and expertise. Mathe- maticians must navigate through unfamiliar theorem libraries and often engage in repetitive tasks due ', 'modified_lines': ' ∗Equal contribution. †Corresponding author. 1The model can be found at https://huggingface.co/FrenzyMath/Herald_translator. The code of Herald and LeanSearch can be found at https://github.com/frenzymath/herald_ translator. 1 Published as a conference paper at ICLR 2025 ', 'original_lines': '', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': 'Abstract', 'after_section': None, 'context_after': 'such as using LLMs to annotate Lean corpora (Wang et al., 2024) and Expert Iteration (Ying et al., 2024a; Xin et al., 2024a). Yet, these methods do not fully leverage the detailed structural information provided by the Lean 4 compiler and the pyramid architecture of the Lean repository. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'However, the scarcity of parallel data between natural and formal languages, which consists of one-to-one pairs aligning natural language with its formal language counterpart, limits the progress of LLM-based translation approaches. To address this scarcity, existing works explore methods ', 'modified_lines': '', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Project (See Appendix F.1), using DeepSeek-Prover-V1.5 (Xin et al., 2024b) to complete the proofs. In conclusion, our contributions are as follows: ', 'paragraph_idx': 9, 'before_section': '1 INTRODUCTION', 'context_before': 'Based on the Herald dataset, we fine-tuned a model for NL-FL statement translation. To validate the generated formalized statements, we apply both Lean compiler and LLM back-translation checks. 
', 'modified_lines': 'Our model achieves 96.7% accuracy on miniF2F-test (Zheng et al., 2022) and 23.5% accuracy on our internal graduate-level textbook dataset, outperforming InternLM2-Math-Plus-7B (Ying et al., 2024b) (73.0% and 7.5%) and TheoremLlama (Wang et al., 2024) (55.0% and 4.0%). To demonstrate the model’s effectiveness in autoformalization, we apply the Herald translator to a section of the Stack ', 'original_lines': 'Our model achieves 96.7% accuracy on miniF2F-test(Zheng et al., 2022) and 23.5% accuracy on our internal graduate-level textbook dataset, outperforming InternLM2-Math-Plus-7B Ying et al. (2024b)(73.0% and 7.5%) and TheoremLlama (Wang et al., 2024) (55.0% and 4.0%). To demonstrate the model’s effectiveness in autoformalization, we apply the Herald translator to a section of the Stack ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Despite these efforts, the primary limitation of all the aforementioned methods lies in the intrinsic weaknesses of LLMs when applied to mathematics and logical reasoning. As a result, the generated ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'are first generated by the Herald translator and then fed into a powerful automatic theorem prover (e.g., DeepSeek Prover V1.5) to obtain the final formalized corpus. ', 'modified_lines': 'Extending beyond statements, auto-formalization of proofs presents a more complex challenge. Jiang et al. (2023b) and Xin et al. (2024c) propose frameworks that use in-context learning to generate formal proof sketches from LLM-produced natural language proofs. These sketches are then complemented by auto-theorem-proving tools such as sledgehammer in Isabelle to fill in any gaps. Wang et al. (2024) and Shao et al. (2024) generate complete formal proofs with natural language in-line comments using fine-tuned LLMs, with Wang et al. (2024) also capable of translating both natural language statements and proofs. NL-FL dataset generation The pursuit of auto-formalization faces significant challenges primarily due to the shortage of high-quality and high-quantity NL-FL pairs as training data. Current efforts in generating these pairs still face substantial limitations. Several approaches (Jiang et al., 2023a; Lin et al., 2024; Wang et al., 2024; Ying et al., 2024a) have recently attempted to address this issue by leveraging LLMs to generate NL-FL pairs. Specifically, MMA (Jiang et al., 2023a) uses LLMs to generate 88K NL statements, starting from formal statements extracted by the LeanDojo framework. Lean-STaR (Lin et al., 2024) takes a different approach by generating NL ‘proof thoughts’ at each step of a formal proof, producing 52,438 NL proof thoughts based on theorems in the LeanDojo library. TheoremLlama (Wang et al., 2024) enhances the process by introducing a bootstrapping technique where NL proofs are integrated into Lean4 code to create a training dataset. Lean Workbook (Ying et al., 2024a) proposes a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems (extracted from math contest forums) into Lean 4 statements and vice versa. ', 'original_lines': 'MMA (Jiang et al., 2023a) uses LLMs to generate 88K NL statements, starting from formal statements extracted by the LeanDojo framework. 
Lean-STaR (Lin et al., 2024) takes a different approach by generating NL ‘proof thoughts’ at each step of a formal proof, producing 52,438 NL proof thoughts based on theorems in the LeanDojo library. TheoremLlama (Wang et al., 2024) enhances the process by introducing a bootstrapping technique where NL proofs are integrated into Lean4 code to create a training dataset. Lean Workbook (Ying et al., 2024a) proposes a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems (extracted from math contest forums) into Lean 4 statements and vice versa. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.1 NL-FL DATA GENERATION This subsection details the process behind creating the Herald dataset, a large-scale collection of NL- ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'formalization model on the Herald dataset. These steps collectively contribute to enhancing the autoformalization performance of LLMs within the Lean4 environment. ', 'modified_lines': '', 'original_lines': '3 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '3.1 NL-FL DATA GENERATION', 'after_section': '3.1 NL-FL DATA GENERATION', 'context_after': 'from Mathlib4. Lean-Jixia parses Lean files to extract key metadata, including theorem declarations, proof structures, and dependency relationships. We select five main components to enhance the FL to NL translation process: head statements, kind, docstrings, neighbor statements, and dependent ', 'paragraph_idx': 18, 'before_section': '3.1 NL-FL DATA GENERATION', 'context_before': 'Structured Information and Contextual Augmentation The first step in our methodology in- volves extracting essential components from Lean code that encapsulate formal statements. We utilize ', 'modified_lines': 'Lean-Jixia2, a static analysis tool specifically designed for Lean 4, to extract structured information ', 'original_lines': 'Lean-Jixia1, a static analysis tool specifically designed for Lean 4, to extract structured information ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '3.1 NL-FL DATA GENERATION', 'after_section': None, 'context_after': 'Figure 2: Illustration of how related instances are retrieved: NL-FL statement examples are embedded and stored in a vector database. The statement being informalized is treated as a query and embedded by the same model, which is designed to account for mathematical similarity. The vector database then retrieves a list of relevant theorems based on cosine similarity of the embeddings. relevant to the new theorem. This context-aware guidance ensures that the LLM’s translation maintains the technical precision and conceptual integrity of the original theorem. ', 'paragraph_idx': 21, 'before_section': '3.1 NL-FL DATA GENERATION', 'context_before': 'theorem to be translated in this embedding space is then placed in the instruction set of the LLM, thereby enhancing the quality of translation. ', 'modified_lines': 'By calculating the proximity between the embedding of the theorem that requires translation and the embeddings of the annotated examples, we can effectively determine the most relevant precedent or analogous theorem. 
Incorporating the closest matching theorems into the LLM’s instructions functions as a contextual anchor, which guides the model in understanding the specific mathematical domain and terminology 2https://github.com/reaslab/Lean-Jixia 4 Published as a conference paper at ICLR 2025 ', 'original_lines': '1https://github.com/reaslab/Lean-Jixia 4 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 By calculating the proximity between the embedding of the theorem that requires translation and the embeddings of the annotated examples, we can effectively determine the most relevant precedent or analogous theorem. Incorporating the closest matching theorems into the LLM’s instructions functions as a contextual anchor, which guides the model in understanding the specific mathematical domain and terminology ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'To address these challenges, we introduce 2 innovative augmentation techniques designed to both expand the dataset, and align it more closely with the real-world distribution of mathematical ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'fitting, while the deformalization process often produces informal statements that deviate from the natural distribution found in textbooks, resulting in rigid, repetitive, or overly precise content that hinders generalization. ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '• Multi-linguistic Translation We generate additional informal statements by translating the formal statements into Chinese, French and Russian. This results in a set of informal statements that ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'textbooks and research papers, certain conditions are often omitted because they are considered obvious or conventionally understood by the reader. For example, a theorem might not explicitly mention the requirement that a function be continuous if that is implied by the context. ', 'modified_lines': '', 'original_lines': ' 6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Figure 4: Demonstration of LLM Informal Augmentation ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Our data preparation process involved several key steps to ensure a comprehensive and balanced dataset. We began by collecting 580k NL-FL pairs from the Herald dataset. 
From this, we cre- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We selected DeepSeek-Prover-V1.5-Base 7B as our base model due to its extensive training on formal programming languages like Lean, which provides a strong foundation for formal reasoning tasks. ', 'modified_lines': '', 'original_lines': ' 7 Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 STATEMENT FORMALIZING MODEL', 'after_section': '4.1 STATEMENT FORMALIZING MODEL', 'context_after': '1. Translation: Using our trained model to translate informal statements from the test set into ', 'paragraph_idx': 42, 'before_section': '4.1 STATEMENT FORMALIZING MODEL', 'context_before': '4.1.2 VALIDATION PIPELINE ', 'modified_lines': 'For validation, we adopt the pipeline from the Lean Workbook (Ying et al., 2024a), which includes several key steps: ', 'original_lines': 'For validation, we adopt the pipeline from the LeanWorkbook project (Ying et al., 2024a), which includes several key steps: ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}, {'section': 'Abstract', 'after_section': None, 'context_after': 'miniF2F (Zheng et al., 2022) A widely-used benchmark dataset for formal mathematics. Extract Theorem A custom dataset compiled by extracting theorems from advanced undergraduate- level textbooks using OCR on scanned materials. It covers a wide range of mathemati- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 2: Performance comparison of different models across various datasets. The last two datasets (Extract Theorem and College CoT) are shuffled subsets of 200 samples each. ', 'modified_lines': '', 'original_lines': '4.1.3 RESULT To evaluate the performance of our model, we conducted comprehensive tests comparing Herald with several models in similar settings. Our test suite included a diverse range of datasets: 8 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'theorems, and removing an unnecessary condition in one of them. Notably, the model demonstrated strong understanding of the content, achieving both mathematical and programming correctness. 
For prover configuration, we use DeepSeek-Prover-V1.5-RL + RMaxTS with 4 × 512 sample budget), ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'The generation was completed efficiently using a 16-pass setting, with human checks revealing only two necessary theorem modifications: correcting the conclusion in two of the auto-formalized ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Haohan Lin, Zhiqing Sun, Yiming Yang, and Sean Welleck. Lean-star: Learning to interleave thinking and proving, 2024. URL https://arxiv.org/abs/2407.10040. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'provers with informal proofs. In The Eleventh International Conference on Learning Representa- tions, 2023b. ', 'modified_lines': '', 'original_lines': '10 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. MiniF2F: A cross-system benchmark In The Tenth International Conference on Learning ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Jiawei Hong, Kuikun Liu, Ziyi Wang, et al. InternLM-Math: Open math large language models toward verifiable reasoning. arXiv preprint arXiv:2402.06332, 2024b. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
time: 2025-02-27 06:51:16
venue: ICLR.cc/2025/Conference
original_openreview_id: pKRBKVjnhT
revision_openreview_id: JLyXcaKRa0
content:
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Residual policy learning (Silver et al., 2018; Johannink et al., 2019) offers an efficient approach to learning challenging tasks by training a policy to output residual actions using RL, where a subopti- mal base policy is provided. This approach has the potential to address the optimization challenges in multi-task RL (Wu et al., 2024b), particularly when the base policy can effectively explore all tasks. Motivated by this, we propose ResDex to train a residual multi-task policy for universal dexterous grasping. The key question then becomes, how to efficiently acquire a base policy that possesses some generalizability to grasp a wide range of objects? Directly applying multi-task RL to all objects leads to worse results due to multi-task gradient interference (Yu et al., 2020) and re- ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'vestigate how to directly learn a multi-task dexterous grasping policy across thousands of objects, which enables both efficient learning and enhanced generalization. ', 'modified_lines': '†Correspondence to Zongqing Lu <[email protected]>. 1 Published as a conference paper at ICLR 2025 ', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Residual Policy Learning provides an effective approach to learn challenging RL tasks when a base policy is available. In robotics, residual policy learning is extensively applied in both manipulation ', 'paragraph_idx': 11, 'before_section': None, 'context_before': '2 RELATED WORK Dexterous Grasping (Pons et al., 1999; Kappassov et al., 2015) continues to be a formidable chal- ', 'modified_lines': 'lenge, given the high degrees of freedom in multi-fingered robotic hands and the complex geometries and physical properties of real-world objects. A fundamental task in dexterous grasping is to gen- erate grasping poses. Recent studies have employed various methods such as contact points (Shao et al., 2020; Wu et al., 2022), affordance maps (Brahmbhatt et al., 2019; Jiang et al., 2021), natural hand annotations (Wei et al., 2023; Hang et al., 2024), and grasping datasets (Chao et al., 2021; Wang et al., 2023) to train models for synthesizing hand grasping poses. While generating target grasping poses is crucial, successfully completing a grasp also requires close-loop policies that can manage Published as a conference paper at ICLR 2025 the entire trajectory. In learning dexterous grasping policies, both imitation learning (Qin et al., 2022b; Mandikal & Grauman, 2022) and reinforcement learning (RL) (Rajeswaran et al., 2017; Wu et al., 2024b; Yuan et al., 2024; Zhou et al., 2024; Zhang et al., 2025) have shown promise. The latter offers scalable advantages across a variety of objects due to its independence from human data collection and the efficiency of simulation environments (Makoviychuk et al., 2021). 
Recent advancements in research explore universal dexterous grasping using RL for thousands of objects. UniDexGrasp (Xu et al., 2023) and UniDexGrasp++ (Wan et al., 2023) introduce curriculum learn- ing and a teacher-student framework to enable training on numerous objects. UniDexFPM (Wu et al., 2024a) extends these approaches to universal functional grasping tasks. In our study, we propose an improved RL method for universal dexterous grasping that is more efficient and demonstrates superior performance and generalizability. ', 'original_lines': 'lenge, given the high degrees of freedom in multi-fingered robotic hands and the complex geome- tries and physical properties of real-world objects. A fundamental task in dexterous grasping is to generate grasping poses. Recent studies have employed various methods such as contact points (Shao et al., 2020; Wu et al., 2022), affordance maps (Brahmbhatt et al., 2019; Jiang et al., 2021), natural hand annotations (Wei et al., 2023; Hang et al., 2024), and grasping datasets (Chao et al., 2021; Wang et al., 2023) to train models for synthesizing hand grasping poses. While generating target grasping poses is crucial, successfully completing a grasp also requires close-loop policies that can manage the entire trajectory. In learning dexterous grasping policies, both imitation learn- ing (Qin et al., 2022b; Mandikal & Grauman, 2022) and reinforcement learning (RL) (Rajeswaran et al., 2017; Wu et al., 2024b; Zhang et al., 2025) have shown promise. The latter offers scalable advantages across a variety of objects due to its independence from human data collection and the efficiency of simulation environments (Makoviychuk et al., 2021). Recent advancements in research Under review as a conference paper at ICLR 2025 explore universal dexterous grasping using RL for thousands of objects. UniDexGrasp (Xu et al., 2023) and UniDexGrasp++ (Wan et al., 2023) introduce curriculum learning and a teacher-student framework to enable training on numerous objects. UniDexFPM (Wu et al., 2024a) extends these approaches to universal functional grasping tasks. In our study, we propose an improved RL method for universal dexterous grasping that is more efficient and demonstrates superior performance and generalizability. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 THE TEACHER-STUDENT FRAMEWORK FOR UNIVERSAL DEXTEROUS GRASPING', 'after_section': '3.2 THE TEACHER-STUDENT FRAMEWORK FOR UNIVERSAL DEXTEROUS GRASPING', 'context_after': 'in two stages to address these issues. First, a state-based policy πS using privileged object information to master all tasks. Then, this policy is distilled into a vision- based policy using DAgger (Ross et al., 2011), an online imitation learning method. ', 'paragraph_idx': 18, 'before_section': '3.2 THE TEACHER-STUDENT FRAMEWORK FOR UNIVERSAL DEXTEROUS GRASPING', 'context_before': 'Directly optimizing the vision-based policy using RL faces challenges due to gradient interference (Yu et al., 2020) in multi-task RL and the high dimensionality of point cloud observations. 
Recent works (Xu et al., 2023; Wan et al., 2023; Wu et al., 2024a) have adopted a teacher-student framework ', 'modified_lines': 't , cω, at−1) is trained ', 'original_lines': 't , cω, at−1) is trained ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '4.1 LEARNING GEOMETRY-AGNOSTIC POLICIES', 'after_section': '4.1 LEARNING GEOMETRY-AGNOSTIC POLICIES', 'context_after': 'The grasping proposal reward rproposal inherently leaks the object’s geometric information, as the target relative wrist pose specifies “where to grasp on the object”. To mitigate this unwanted infor- mation leakage and enhance generalization, we replace this term with a pose reward: ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'policies within a mixture-of-experts (MoE) framework. ResDex demonstrates efficient training and robust generalization to unseen objects. ', 'modified_lines': 'ψ (at|Jt, bp tends to learn more generalizable grasping strategies and rely on the proprioceptive feedback to adjust actions. Although we cannot use a fully blind policy in our setting – as the agent must know the object’s location to approach it – we integrate this insight by proposing a geometry-agnostic base policy, πB t , at−1), which uses only robot proprioception J and the 3D position of the object bp. ', 'original_lines': '', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}, {'section': '4.2 RESIDUAL MULTI-TASK REINFORCEMENT LEARNING', 'after_section': '4.2 RESIDUAL MULTI-TASK REINFORCEMENT LEARNING', 'context_after': 't , cω, at−1), is parameterized by ϕ. It utilizes all available state-based observations to better maximize performance in solving POMDPs. Given the pre-trained base policy πB from the complete observations to compute a base action aB ', 'paragraph_idx': 26, 'before_section': '4.2 RESIDUAL MULTI-TASK REINFORCEMENT LEARNING', 'context_before': 'While the base policy trained on a single object offers some degree of generalizability across various objects, it typically achieves a low overall success rate. To address this, we introduce residual policy learning to develop a policy that masters all objects. ', 'modified_lines': 'The state-based residual policy, denoted as πR ', 'original_lines': 'The state-based residual policy, denoted as πR ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 26}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hanwen Jiang, Shaowei Liu, Jiashun Wang, and Xiaolong Wang. Hand-object contact consistency reasoning for human grasps generation. In Proceedings of the IEEE/CVF international conference ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bam- ford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024a. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Gerrit Schoettler, Ashvin Nair, Jianlan Luo, Shikhar Bahl, Juan Aparicio Ojea, Eugen Solowjow, and Sergey Levine. 
Deep reinforcement learning for industrial insertion tasks with visual inputs and natural rewards. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'tured prediction to no-regret online learning. In Proceedings of the fourteenth international con- ference on artificial intelligence and statistics, 2011. ', 'modified_lines': '', 'original_lines': '12 Under review as a conference paper at ICLR 2025 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-06 14:30:25
ICLR.cc/2025/Conference
JLyXcaKRa0
p89DaSxT4A
[]
2025-02-26 12:04:22
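Note on the row format (the rows above and below store their `content` cell as a Python-literal list of edit records with fields such as `section`, `context_before`, `context_after`, `original_lines`, and `modified_lines`): the sketch below is a minimal, hypothetical illustration of how such a record might be decoded and applied to a document string. The interpretation — replace `original_lines` with `modified_lines`, anchored by the surrounding context snippets — is inferred from the field names only, and the helper names `parse_content` / `apply_edit` are not part of the dataset.

import ast

def parse_content(cell: str) -> list:
    # Assumption: the `content` cell is a Python literal (single-quoted dicts,
    # None, etc.), so ast.literal_eval can decode it; empty cells look like "[]".
    return ast.literal_eval(cell) if cell.strip() else []

def apply_edit(document: str, record: dict) -> str:
    # Inferred semantics: swap `original_lines` for `modified_lines`,
    # using the context snippets as an anchor when they are present.
    before = record.get('context_before') or ''
    after = record.get('context_after') or ''
    old = record.get('original_lines') or ''
    new = record.get('modified_lines') or ''
    anchored_old = before + old + after
    anchored_new = before + new + after
    if anchored_old and anchored_old in document:
        return document.replace(anchored_old, anchored_new, 1)
    if old and old in document:
        return document.replace(old, new, 1)
    return document  # leave the text unchanged if neither span is found

# Hypothetical usage: apply every record of one row, in order.
# for rec in parse_content(content_cell):
#     revised_text = apply_edit(revised_text, rec)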
ICLR.cc/2025/Conference
43Ef1OY1dX
Mmva5kEdPa
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'INTRODUCTION ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'quences and their corresponding 3D structures eliminating the need for a two-stage generation approach. Moreover, DPLM-2 demonstrates competitive performance in various conditional generation tasks, including folding, inverse folding, and ', 'modified_lines': 'scaffolding with multimodal motif inputs, as well as providing structure-aware representations for predictive tasks. ', 'original_lines': 'scaffolding with multimodal motif inputs. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 encoder to yield invariant backbone geometric features, a lookup-free quantizer (LFQ) to discretize DPLM-2 as a protein foundation model: (1) unconditional protein sequence-structure mixed-modal co-generation; (2) protein sequence-structure joint representation for predictive tasks; (3) structure prediction; (4) fixed-backbone sequence generation; (5) conditional protein generation with structure- sequence mixed-modal input and output. performance in co-generating structurally-compatible sequences and consequently resorts to instance- level knowledge distillation from ProteinMPNN (Dauparas et al., 2022). Furthermore, it completely falls short in protein folding for given sequences, showing Mulitflow’s inadequacy in sequence ', 'paragraph_idx': 3, 'before_section': 'Abstract', 'context_before': 'generative models that can integrate both sequence and structure, enabling a more comprehensive understanding of protein behaviors and functions. This, therefore, raises the following question: ', 'modified_lines': '∗This work was done during Xinyou’s internship at ByteDance Research. Published as a conference paper at ICLR 2025 Figure 1: Overall illustration of DPLM-2. (A) Structure tokenization consists of a GVP-based encoded structural features into structure tokens within a codebook, and an IPA-based decoder as de-tokenizer to convert structure tokens back to backbone atomic coordinates. (B) Multimodal learning and generation of protein structure and sequence with DPLM-2. (C) Various applications of Can we build a multimodal protein foundation model to simultaneously model, understand, and generate both sequences and structures? To pursue this goal, Multiflow (Campbell et al., 2024) is a recent effort for structure-sequence co-generation that incorporates sequences into structure-based generative models using multimodal flow matching. Despite its impressive structure generation capability, Multiflow exhibits suboptimal ', 'original_lines': 'Can we build a multimodal protein foundation model to simultaneously model, understand, and generate both sequences and structures? To pursue this goal, Multiflow (Campbell et al., 2024) is a recent effort for structure-sequence co-generation that incorporates sequences into structure-based generative models using multimodal flow matching. 
Despite its impressive structure generation capability, Multiflow exhibits suboptimal 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Figure 1: Overall illustration of DPLM-2. (A) structure tokenization consists of a GVP-based encoded structural features into structure tokens within a codebook, and an IPA-based decoder as de-tokenizer to convert structure tokens back to backbone atomic coordinates. (B) multimodal learning and generation of protein structure and sequence with DPLM-2. (C) various applications of ', 'after_paragraph_idx': None, 'before_paragraph_idx': 3}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'sequence-based generative language models like DPLM, with their strong sequence generation and predictive abilities, hold great promise as a foundation for multimodal learning for proteins. Despite its exciting potential, this approach presents two key challenges: (1) language models cannot directly ', 'paragraph_idx': 6, 'before_section': 'Abstract', 'context_before': 'implicitly capture structural information enables direct structure prediction (Lin et al., 2022). As a consequence, the limitation in sequence understanding and generation renders Multiflow inadequate as a multimodal protein generative foundation. ', 'modified_lines': 'Inspired by the connection between evolutionary knowledge and spatial interactions, we deem that ', 'original_lines': 'Inspired by the connection between evolutionary knowledge and spatial interactions, we suggest that ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'learning in DPLM-2: (1) the core difficulty lies in enabling the language model to learn structural information, which is challenging and remains elusive, for which we develop a lookup-free quanti- zation (LFQ, Yu et al., 2023) structure tokenizer to convert 3D coordinates to discrete tokens and vice versa (Fig. 1A, §3.3); (2) we implement an efficient warm-up strategy to exploit the connection between large-scale evolutionary data and structural inductive biases from pre-trained sequence-based DPLM (Fig. 1B, §3.2); and (3) we also address the exposure bias problem in discrete diffusion for sequence learning (Ranzato et al., 2016; Bengio et al., 2015) by a self-mixup training strategy that ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'In this paper, we address the aforementioned questions by introducing DPLM-2, a multimodal protein foundation model that advances the state-of-the-art discrete diffusion-based protein language model (i.e., DPLM) to accommodate both sequences and structures. 
By training on both experimental and ', 'modified_lines': 'high-quality synthetic structures, DPLM-2 learns the joint distribution of sequence and structure, 2 structuretokenizerstructurede-tokenizerIPAGVPdiscretestruct tokensPDB + AFDB-SwissProt (200K)structure dataencodedecodex=ℝL×Nbackb×3˜x=ℝL×Nbackb×3fape lossDPLM-2Transformer Layer (Bidirectional Multihead Attention + MLP)structure tokenizerMKTVRQERLKYRAamino-acid tokenizerCApplications(1) unconditional protein generation (struct-seq mixed-modal co-generation)AStructure TokenizationBTraining and Sampling of Multimodal Diffusion Protein Language Model (DPLM-2)UniRef-50 (45M)evolutionary scale sequence datawarmup from pre-trained sequence-based DPLMDPLM-2struct. de-tok.(3) folding (seq-cond. structure generation)DPLM-2struct. de-tok.(4) inv-folding (struct-cond. sequence generation)DPLM-2struct. tokenizerMKTVRQERLKMTKYAKRQERYARDPLM-2struct. de-tok.MKTVRQERLKYRA✘✘✘✘✘RQER✘✘✘(5) conditional protein generation (e.g., motif-scaffolding with struct-seq mixed-modal input & output)x(t)x(t-1)x(T)x(0)⋯DPLM-2⋯cross-entropy123456789123456789residue index(ground-truth) tokens masked tokensforward discrete diffusion x Niterative denoising generationDPLM-2struct. tokenizerclassifier/regressor(2) struct-aware protein representations for downstream predictive tasksstruct tokensamino-acid tokensMKTVRQERLKs={1...8192}L00521256000747898012lookup-free quantizer (LFQ)⋯struct. tokenizermaskmaskmaskmaskmaskmaskmaskMKTVRQERLKMKTVRQERLKMKTVRQERLKLoRAx(0)x(t)x(0)~ Published as a conference paper at ICLR 2025 as well as their marginals and conditionals. We present several key recipes to facilitate multimodal ', 'original_lines': 'high-quality synthetic structures, DPLM-2 learns the joint distribution of sequence and structure, as well as their marginals and conditionals. We present several key receipts to facilitate multimodal 2 structuretokenizerstructurede-tokenizerIPAGVPdiscretestruct tokensPDB + AFDB-SwissProt (200K)structure dataencodedecodex=ℝL×Nbackb×3˜x=ℝL×Nbackb×3fape lossDPLM-2Transformer Layer (Bidirectional Multihead Attention + MLP)structure tokenizerMKTVRQERLKYRAamino-acid tokenizerCApplications(1) unconditional protein generation (struct-seq mixed-modal co-generation)AStructure TokenizationBTraining and Sampling of Multimodal Diffusion Protein Language Model (DPLM-2)UniRef-50 (45M)evolutionary scale sequence datawarmup from pre-trained sequence-based DPLMDPLM-2struct. de-tok.(3) folding (seq-cond. structure generation)DPLM-2struct. de-tok.(4) inv-folding (struct-cond. sequence generation)DPLM-2struct. tokenizerMKTVRQERLKMTKYAKRQERYARDPLM-2struct. de-tok.MKTVRQERLKYRA✘✘✘✘✘RQER✘✘✘(5) conditional protein generation (e.g., motif-scaffolding with struct-seq mixed-modal input & output)x(t)x(t-1)x(T)x(0)⋯DPLM-2⋯cross-entropy123456789123456789residue index(ground-truth) tokens masked tokensforward discrete diffusion x Niterative denoising generationDPLM-2struct. tokenizerclassifier/regressor(2) struct-aware protein representations for downstream predictive tasksstruct tokensamino-acid tokensMKTVRQERLKs={1...8192}L00521256000747898012lookup-free quantizer (LFQ)⋯struct. tokenizermaskmaskmaskmaskmaskmaskmaskMKTVRQERLKMKTVRQERLKMKTVRQERLKLoRAx(0)x(t)x(0)~ Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'ranging from (sequence-conditioned) folding (Fig. 1C(3), §4.2), (structure-conditioned) inverse-folding (Fig. 
1C(4), §4.3), to more successful motif-scaffolding given multimodal motif conditioning (Fig. 1C(5), §4.4). (iv) Last but not least, we demonstrate that the structure-aware protein representation learned by DPLM-2 brings additional benefit for a range of protein predictive tasks (Fig. 1C(2), §4.5). 2 PRELIMINARIES ', 'paragraph_idx': 6, 'before_section': 'Abstract', 'context_before': 'of high-quality data, a decent structure tokenizer and publicly-accessible sequence-only pre-trained language models. ', 'modified_lines': '(ii) As a mulitmodal generative model, DPLM-2 enables unconditional protein co-generation of both structure and sequence, which demonstrates good structure-sequence consis- tency (Fig. 1C(1)). Our empirical evaluation shows that DPLM-2 attains competitive co-generation performance compared to structure-based generative approaches, while the proteins generated by DPLM-2 have a better alignment with the characteristics of natural proteins in secondary structure statistics (§4.1). (iii) In addition, DPLM-2 supports various conditional generation tasks by its multimodal nature, Concurrent work. During the development of DPLM-2, we became aware of the recently pro- posed multimodal generative protein language model, ESM3 (Hayes et al., 2024), which also jointly models tokenized structure and sequence using a generative masked language model. While both models aim for similar goals, DPLM-2 differs from ESM3 in several key aspects: (1) Multimodal protein generation: DPLM-2 treats structure and sequence modalities equally by design and em- phasizes the simultaneous co-generation of compatible protein sequence and structure, whereas ESM3 is a sequence-first model (other modalities are subject to dropout during training) and gener- ates in cascaded modality-by-modality manner. (2) Data and compute efficiency: ESM3 seeks to perform mulimodal pre-training from scratch using a huge amount of synthetic data, with modal size ranging from 1.4B to 98B. With strict license and absence of training infrastructure, this pro- hibits community from replicating for customized purposes. In contrast, DPLM-2 leverages much smaller datasets (PDB + SwissProt) and builds on open-source, pre-trained sequence-based DPLM (150M/650M/3B), which leverages DPLM’s learned evolutionary knowledge and inherits strong sequence understanding and generation capabilities. We are also committed to open-source our models, training and inference code to democratize multimodal generative protein LMs to benefit the community. Overall, we believe DPLM-2 provides unique contributions to the community. ', 'original_lines': '(ii) As a mulitmodal generative model, DPLM-2 enables unconditional co-generation of designable and diverse proteins that guarantees consistency between structure and se- quence (Fig. 1C(1)). Our empirical evaluation show that DPLM-2 attains competitive co- generation performance compared to structure-based generative approaches, while DPLM- 2’s generated proteins better align with the characteristics of natural proteins regarding secondary structure statistics (§4.1). (iii) In addition, DPLM-2 allows various conditional generation tasks by its multimodal nature, Concurrent work. During the development of DPLM-2, we became aware of the recently proposed multimodal generative protein language model, ESM3 (Hayes et al., 2024), which also jointly models tokenized structure and sequence using a generative masked language model. 
While both models aim for similar goals, DPLM-2 differs from ESM3 in several key aspects: (1) Multimodal protein generation: DPLM-2 treats structure and sequence modalities equally by design and emphasizes the simultaneous co-generation of compatible protein sequence and structure, whereas ESM3 is a sequence-first model (other modalities are subject to dropout during training) and generates in cascaded modality-by-modality manner. (2) Data and compute efficiency: ESM3 seeks to perform mulimodal pre-training from scratch using a huge amount of synthetic data, with modal size ranging from 1.4B to 98B. With strict license and absence of training infrastructure, this prohibits community from replicating for customized purposes. In contrast, DPLM-2 leverages much smaller datasets (PDB + SwissProt) and builds on open-source, pre-trained sequence-based DPLM (150M/650M/3B), which leverages DPLM’s learned evolutionary knowledge and inherits strong sequence understanding and generation capabilities. We are also committed to open-source our models, training and inference code to democratize multimodal generative protein LM to benefit the community. Overall, we believe DPLM-2 provides unique contributions to the community. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'after_section': None, 'context_after': '} { 3 is the real-value Cartesian coordinates of its residue xi ∈ atoms (we only consider backbone atoms herein, i.e., [N, Cα, C, O] with Natoms = 4). Namely, ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '= , and egorical variable for its amino acid type in ', 'modified_lines': 'S ', 'original_lines': 'S ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1)-dimensional probability simplex. The forward process of discrete diffusion defines a Markov process governed by the transition kernel q(x(t) q(x(0)) ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'formance in both generation and representation learning of protein sequences. DPLM is grounded in absorbing discrete diffusion framework (Austin et al., 2021; Zheng et al., 2023a), which is char- acterized by a forward and backward Markov process. Let Cat(x; p) be a categorical distribution ', 'modified_lines': 'on protein sequence y parameterized by a vector p on ( ', 'original_lines': 'on protein sequence y parameterized by a vector p on ( ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 ·', 'after_section': '1 ·', 'context_after': 'Specifically, at time t, it first generates ˜x(0) from pθ( ·| x(t), x(0) = ˜x(0)). 
Within absorbing diffusion, the generation process can be viewed as an by q( ', 'paragraph_idx': 13, 'before_section': '1 ·', 'context_before': 'x(t), ˜x(0))pθ(˜x(0) − | ', 'modified_lines': '1) is sampled x(t)), then a less noisy x(t ', 'original_lines': 'x(t)), then a less noisy x(t 1) is sampled ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 13}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.3 LEARNING STRUCTURE TOKENIZATION The core difficulty of achieving a mulimodal protein LM lies in enabling the language model to learn structural information, which is challenging and remains elusive, Tokenizing continuous ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'intact and reduce the risk of catastrophic forgetting, we apply LoRA (Hu et al., 2021) to limit too much deviation to the original parameters. This approach not only lowers training costs compared to starting from scratch but also effectively transfers valuable evolutionary information. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'VAE (Van Den Oord et al., 2017) framework can be summarized as follows: ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'et al., 2024; Liu et al., 2023; Gao et al., 2024; Lu et al., 2024). This allows language models to better learn the composition of local structural elements. However, how to learn an effective structure tokenizer remains an active research question. ', 'modified_lines': 'Structure tokenization und er a typical VQ- ', 'original_lines': 'Structure tokenization under a typical VQ- ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We utilize a GVP-based (Jing et al., 2020) structure encoder from pre-trained GVP-Transformer (Hsu RL . Given the encoded feature e = encoder(x) log2 |Z| k=1 ∈ } zi[k] ei[k] > 0 } { + 1 { is given by: zi ∈ ∀ The LFQ-based structure tokenizer is trained on the same structure dataset as mentioned before, using a combination of reconstruction, commitment, and entropy regularization losses, similar to standard Evaluation. As shown in Fig. 2A, LFQ significantly outperforms VQ-VAE regarding reconstruction accuracy while training of LFQ is much faster than VQ-VAE (2 vs. 15 days on 8 A100s). Increasing codebook size leads to improved reconstruction while a codebook size of 8192 achieves the best ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '∈ zi ∈ { |Z|} ', 'modified_lines': 'coordinates ˜x from the discrete tokens. et al., 2022) and a IPA-based (Jumper et al., 2021) structure decoder. In terms of quantizer, our preliminary experiment showed that conventional VQ-VAE pretty much struggles in training. To mitigate this, we instead adopts Lookup-Free Quantizer (LFQ) from the currently best visual tok- enizer (Yu et al., 2023) to protein structure tokenization. Specifically, the latent space of LFQ is decomposed as the Cartesian product of single-dimensional binary variables, as C = Ck, × log2 |Z|, each dimension where × (indexed by k) of the quantized representation quant(ei) is obtained from: Ck = 1, 1 } {− − − ≤ 11 . zi = index(quant(ei)) = Ci,k = sign(ei[k]) = quant(ei)[k] = As such, with LFQ, the token indices for z = 1 0 ei[k] > 0 } { z1, z2, ..., zi, ..., zL} { log2 |Z| 2k k=1 VQ-VAE. Here FAPE loss (Jumper et al., 2021) is used as the primary reconstruction loss. 
(see §B.1 for more details.) ', 'original_lines': 'coordinates from the discrete tokens. et al., 2022) with its parameters frozen during training. The structure encoder transforms backbone structures into geometric features, which are projected onto a latent embedding using an MLP layer. The structure decoder follows the IPA-based modules from AlphaFold2 (Jumper et al., 2021), using 4 EvoFormer layers without MSA row attention, following ESMFold (Lin et al., 2022), to generate atomic positions from the structure tokens. We train structure tokenizer using the same structure data as our mulitmodal language model, containing both experimental and high-quality structures. The training objective of structure tokenizer includes reconstruction loss, codebook commitment loss, and entropy regularization loss to ensure effective codebook utilization. For the reconstruction loss, we adopt the FAPE loss, violation loss, and distogram loss from AlphaFold2 (Jumper et al., 2021), measuring the difference between predicted and native structures. To further enhance the training, we introduce a sequence prediction head on top of the structure decoder’s final representation and minimize the cross-entropy against the native sequence. In terms of quantizer, our preliminary experiment showed that conventional VQ-VAE pretty much struggles in training. To mitigate this, we instead adopts Lookup-Free Quantizer (LFQ) from the currently best visual tokenizer (Yu et al., 2023) to protein structure tokenization. Specifically, the latent space of LFQ is decomposed as the Cartesian product of single-dimensional binary variables, log2 |Z|, as C = each dimension (indexed by k) of the quantized representation quant(ei) is obtained from: × × Ck = Ck, where quant(ei[k]) = As such, with LFQ, the token indices for z = 1, 1 {− Ci = sign(ei[k]) = ≤ − 1 0 { } z1, z2, ..., zi, ..., zL} − ei[k] > 0 } , . { log2 |S| k=1 z. (cid:80) 11 zi = index(ei) = 2k VQ-VAE. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES', 'after_section': None, 'context_after': '4 EXPERIMENTS In this section, we evaluate DPLM-2 on various generative and understanding scenarios, including ', 'paragraph_idx': 11, 'before_section': '2 PRELIMINARIES', 'context_before': 'structure tokens and secondary structures. For instance, a lot of structure tokens concentrated at the alpha helix and beta sheet vertices, while some tokens lie between regions. This suggests that structure tokens the fine-grained structural elements in backbone local environment. ', 'modified_lines': ' (cid:80) z. , ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'after_section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'context_after': 'respectively. 
These results demonstrate DPLM-2’s effectiveness as a mulitmodal generative model. of [600, 700, 800, 900, 1000]. As shown in Fig. 3F, notably, for proteins exceeding the maximum training length of 512, the pLDDT scores of sequences sampled by DPLM-2 are close to those of DPLM. This suggests that DPLM-2 largely retains its original sequence generation capability (4) Case study. Fig. 3H shows some generated samples of DPLM-2 up to 700 residues, while in Fig. 3I we showcase that we can manipulate DPLM-2 to design symmetric oligomers by forcing to duplicate the predicted tokens with repetitive structure and sequence patterns. (5) Abaltion study on the training strategy. We investigate the effects of warmup from the sequence- based pre-trained DPLM and data augmentation with high-quality AlphaFold-predicted structures on DPLM-2. The sequence pre-training significantly improve both designability and diversity, while data augmentation can further enhance the designability, especially for long proteins. For more details 4.1.2 DPLM-2 GENERATES PROTEINS THAT RESEMBLES NATURAL PROTEINS ', 'paragraph_idx': 21, 'before_section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'context_before': '(2) DPLM-2 can attains competitive performance with strong baselines on co-generation, as well as backbone-only and sequence-only generation, respectively. As shown in Tab. 2, DPLM-2 achieves the strong sc-TM compared to strong baselines, approaching the quality of native structures ', 'modified_lines': 'from PDB. We notice that ESM3-Open (Hayes et al., 2024), which runs in a sequence-then-structure order, fails short of unconditional generation. Compared to MultiFlow (Campbell et al., 2024), DPLM-2 achieves comparable co-generation quality. Notably, as also reported in Campbell et al. (2024), Multiflow falls short of sequence generation when directly trained from structures with native sequences, resulting in greatly degraded co-generation performance without data distillation from external inverse folding models (ProteinMPNN). For reference, we also provide the result of Multiflow retrained using our training data, where its co-generation performance remains unsatisfying and lags behind DPLM-2, which suggests that DPLM-2 has advantages of directly and effectively learning from complex structure-sequence joint distribution. Moreover, DPLM-2 can also only produce single modality if needed, where it matches the best competitive models in these settings (3) DPLM-2 generates longer proteins beyond training data. As DPLM-2 is trained with a 512 length cutoff, we are curious about its length extrapolation, and evaluate sampled proteins at lengths inherited from sequence pre-training in DPLM, leading to its capability of length extrapolation. 
7 - sequence foldabilityACB- structure diversityscTMScore, ↑scRMSD, ↓- designability: structure-sequence compatibility FHLength: 100scRMSD: 0.53scTM: 0.97Length: 200scRMSD: 0.70scTM: 0.99Length: 300scRMSD: 0.95scTM: 0.98Length: 400scRMSD: 0.81scTM: 0.99Length: 600scRMSD: 3.45scTM: 0.99Length: 700scRMSD: 7.20scTM: 0.98Length: 500scRMSD: 1.30scTM: 0.97pLDDT, ↑#cluster, ↑- structure noveltypdb-TM, ↓- comparison wrt model size- long protein generation ED- case study of structure-sequence co-generated samplesI - showcase of designing symmetric oligomers Published as a conference paper at ICLR 2025 Table 2: Benchmarking comparison of unconditional protein generation, in terms of structure- sequence co-generation, backbone-only generation, and sequence-only generation. For each method, we generate 100 samples for lengths in [100, 200, 300, 400, 500]. * denotes Multiflow variants retrained by us using different dataset – native PDB data without ProteinMPNN distillation and the same training data as DPLM-2 (i.e., PDB+SwissProt), respectively. Structure-sequence Consistency Novelty Diversity scTM ( ↑ ) scRMSD ( ↓ ) pLDDT ( ) avg. pdb-TM ( ↑ ) avg. inner-TM ( ↓ ↓ ) ) MaxCluster ( ↑ struct) Structure-sequence co-generation. 0.904 Native PDB protein 0.624 ESM3-Open (1.4B, seq 0.930 MultiFlow w/ distillation (official ckpt) *MultiFlow w/o distillation 0.750 *MultiFlow (retrained on our training data) 0.871 0.907 DPLM-2 (650M, seq → 0.921 DPLM-2 (650M, struct DPLM-2 (650M, co-generation) 0.925 struct) seq) → → 0.129 4.623 0.232 24.180 3.208 0.098 9.306 0.163 6.580 0.934 6.337 0.117 4.969 0.098 3.899 0.085 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 5.688 24.109 4.741 8.499 6.258 9.403 6.735 3.723 – – 79.447 61.519 62.624 82.246 81.910 82.686 Unconditional backbone generation. (sequence predicted by ProteinMPNN) Native PDB struct. (seq. from PMPNN) FrameDiff FoldFlow RFDiffusion DPLM-2 (650M) Unconditional sequence generation. (structures predicted by ESMFold) EvoDiff DPLM (650M) DPLM-2 (650M) 0.864 3.919 7.965 1.969 4.451 0.969 0.818 0.540 0.914 0.945 0.000 0.000 0.000 0.000 0.082 0.000 0.000 0.000 0.000 5.261 ± ± ± ± ± ± ± ± ± ± – – – – – – – – – – – 35.846 83.252 82.246 0.660 0.704 0.653 0.637 0.640 0.668 0.566 0.657 0.637 0.432 0.541 0.662 – ± ± – – ± ± ± – ± ± ± ± ± ± ± 0.000 0.000 0.195 0.195 0.204 0.000 0.000 0.000 0.195 0.106 0.187 0.199 0.262 0.220 0.356 0.350 0.331 0.280 0.308 0.287 0.262 0.444 0.286 0.352 0.297 0.265 0.242 0.280 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± 0.025 0.046 0.032 0.038 0.052 0.038 0.089 0.030 0.025 0.064 0.023 0.038 0.049 0.025 0.041 0.042 0.776 0.540 0.500 0.490 0.440 0.651 0.575 0.545 0.782 0.252 0.762 0.598 0.575 0.990 0.735 0.700 Figure 4: Analysis regarding secondary structure of generated proteins. (A) Statistics of averaged proportions of secondary structures for proteins from different methods and PDB; (B) Secondary structure vs. designability; (C) Samples of Multiflow, PDB and DPLM-2, as well as their secondary structure distributions. of ablation study, please refer to §A.6. 
', 'original_lines': ' → 7 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 - sequence foldabilityACB- structure diversityscTMScore, ↑scRMSD, ↓- designability: structure-sequence compatibility FHLength: 100scRMSD: 0.53scTM: 0.97Length: 200scRMSD: 0.70scTM: 0.99Length: 300scRMSD: 0.95scTM: 0.98Length: 400scRMSD: 0.81scTM: 0.99Length: 600scRMSD: 3.45scTM: 0.99Length: 700scRMSD: 7.20scTM: 0.98Length: 500scRMSD: 1.30scTM: 0.97pLDDT, ↑#cluster, ↑- structure noveltypdb-TM, ↓- comparison wrt model size- long protein generation ED- case study of structure-sequence co-generated samplesI - showcase of designing symmetric oligomers Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Figure 4: Evaluation of secondary structure of generated proteins. from PDB. Compared to MultiFlow (Campbell et al., 2024), DPLM-2 achieves comparable co- generation quality. Notably, Multiflow’s performance degrades greatly without data distillation from external inverse folding models, while we also provide the result of Multiflow retrained using our training data for reference. We also notice that ESM3 (Hayes et al., 2024), which runs in a sequence-then-structure order, fails short of unconditional generation. Moreover, DPLM-2 can also only produce single modality if needed, where it matches the best competitive models in these settings (3) DPLM-2 generates longer proteins beyond training data. We sample proteins at lengths without suffering from catastrophic forgetting, demonstrating its capability of length extrapolation. of ablation study, please refer to §A.1. ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The goal of inverse folding is to find an amino acid sequence that can fold to a given backbone structure. For evaluation, we employ amino acid recovery (AAR) for sequence evaluation, and we also assess the structure by self-consistency TM-score (scTM) between the native structure and the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'protein sequences, which can be transferred and leveraged into structure modeling. 
4.3 ', 'modified_lines': '', 'original_lines': 'INVERSE FOLDING (STRUCTURE-CONDITIONED SEQUENCE GENERATION) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'after_section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'context_after': 'DPLM-2 (150M) 45.22/46.12 0.87/0.93 48.83/47.96 0.89/0.95 DPLM-2 (650M) 49.01/50.10 0.88/0.93 54.80/53/07 0.91/0.96 52.36/53.72 0.89/0.95 61.67/57.91 0.92/0.96 DPLM-2 (3B) 32.28/33.58 0.87/0.94 37.74/37.59 0.94/0.96 47.06/46.24 0.90/0.95 49.50/49.42 0.94/0.97 Table 4: Comparison on inverse folding task. MultiFlow ESM3 CAMEO 2022 PDB date split Models ', 'paragraph_idx': 21, 'before_section': '4 EXPERIMENTSIn this section, we evaluate DPLM-2 on various generative and understanding scenarios, includingunconditional protein generation (structure, sequence, and structure-sequence co-generation, §4.1),and a variety of conditional tasks, such as folding (§4.2), inverse folding (§4.3) and motif-scaffolding(§4.4), and a series of protein predictive tasks (§4.5).', 'context_before': 'task. We suggest that multimodal training effectively aligns the structure and sequence into the same space, such that DPLM-2 can yield the corresponding sequence without additional training. 4.4 SCAFFOLDING WITH MIXED-MODAL MOTIF CONDITIONING ', 'modified_lines': 'The objective of motif-scaffolding is to generate a suitable scaffold to preserve the structure of the given motif and maintain its original function. We follow the experimental setting of Yim et al. (2024), with 24 motif-scaffolding problems and we sample 100 scaffolds for each motif, where we (1) first determine the length of scaffold, and then (2) keep the motif segment unchanged and sample the scaffold part conditioned on the motif. The scaffold length is sampled from a range provided by Yim et al. (2024), and when there are multiple motifs, the order of motif segments is consistent with Yim et al. (2024). We provide the 3D structure and sequence of motif as input of DPLM-2. As a multimodal model, we evaluate DPLM-2 using sequence-based, structure-based, and co-generation approaches. A scaffold is considered successful if it satisfies both criteria (1) overall designablity, which is successful when pLDDT > 70 (for sequence-based models) or scTM > 0.8, and (2) motif-preseving, which is deemed successful when the predicted motif structure matches the native one with motif-RMSD <1 ˚A. 
9.22/7.64 w/ folding SFT 7.66/4.37 7.37/4.89 w/ folding SFT 6.21/3.78 6.34/3.65 w/ folding SFT 5.71/3.23 INVERSE FOLDING (STRUCTURE-CONDITIONED SEQUENCE GENERATION) 8.35/5.60 6.00/3.41 5.67/3.33 3.40/1.78 4.54/2.54 3.15/1.69 0.75/0.81 0.80/0.86 0.79/0.86 0.84/0.89 0.83/0.89 0.85/0.90 0.76/0.82 0.83/0.88 0.83/0.88 0.89/0.94 0.86/0.92 0.90/0.95 DPLM-2 (650M) DPLM-2 (3B) ', 'original_lines': 'The objective of motif-scaffolding is to generate a suitable scaf- fold to preserve the structure of the given motif and maintain its original function. We follow the experimental setting of Yim et al. (2024), with 24 motif-scaffolding problems and we sam- ple 100 scaffolds for each motif, where we (1) first determine the length of scaffold, and then (2) keep the motif segment unchanged and sample the scaffold part conditioned on the motif. The scaffold length is sampled from a range provided by Yim et al. (2024), and when there are multiple motifs, the order of motif segments is consistent with Yim et al. (2024). We provide the 3D structure and sequence of motif as input of DPLM-2. As a multimodal model, we evaluate DPLM-2 using sequence-based, structure-based, and co-generation approaches. A scaffold is considered successful if it satisfies both criteria (1) overall designablity, which is successful when pLDDT > 70 (for sequence-based models) or scTM > 0.8, and (2) motif-preseving, which is deemed successful when the predicted motif structure matches the native one with motif-RMSD <1 ˚A. Figure 5: Evaluation of motif- scaffolding w.r.t. success rate and num. of solved problems. ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 21}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': None, 'context_after': 'Justas Dauparas, Ivan Anishchenko, Nathaniel Bennett, Hua Bai, Robert J Ragotte, Lukas F Milles, Basile IM Wicky, Alexis Courbet, Rob J de Haas, Neville Bethel, et al. Robust deep learning–based protein sequence design using proteinmpnn. Science, 378(6615):49–56, 2022. ', 'paragraph_idx': 41, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. 
Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design. arXiv preprint arXiv:2402.04997, 2024. ', 'modified_lines': 'Hongrui Chen and Lexing Ying. Convergence analysis of discrete diffusion model: Exact implemen- tation through uniformization. arXiv preprint arXiv:2402.08095, 2024. Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, and Anru R Zhang. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. In International Conference on Learning Representations, 2023. Alexander E Chu, Jinho Kim, Lucy Cheng, Gina El Nesr, Minkai Xu, Richard W Shuai, and Po-Ssu Huang. An all-atom protein generative model. Proceedings of the National Academy of Sciences, 121(27):e2311500121, 2024. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 40}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Chloe Hsu, Robert Verkuil, Jason Liu, Zeming Lin, Brian Hie, Tom Sercu, Adam Lerer, and Alexander Rives. Learning inverse folding from millions of predicted structures. In Kamalika Chaudhuri, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forr´e, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in Neural Information Processing Systems, 34:12454–12465, 2021. ', 'modified_lines': '', 'original_lines': ' 11 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. 
(1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Guillaume Huguet, James Vuckovic, Kilian Fatras, Eric Thibodeau-Laufer, Pablo Lemos, Riashat Islam, Cheng-Hao Liu, Jarrid Rector-Brooks, Tara Akhound-Sadegh, Michael Bronstein, et al. Sequence-augmented se (3)-flow matching for conditional protein backbone generation. arXiv ', 'paragraph_idx': 57, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021. ', 'modified_lines': 'Fei Huang, Tianhua Tao, Hao Zhou, Lei Li, and Minlie Huang. On the learning of non-autoregressive transformers. In International Conference on Machine Learning, pp. 9356–9376. PMLR, 2022. ', 'original_lines': '', 'after_paragraph_idx': 58, 'before_paragraph_idx': 56}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Yeqing Lin and Mohammed AlQuraishi. Generating novel, designable, and diverse protein structures by equivariantly diffusing oriented residue clouds. arXiv preprint arXiv:2301.12485, 2023. 
', 'paragraph_idx': 67, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'novo protein design. bioRxiv, pp. 2022–07, 2022. ', 'modified_lines': 'Mingxiao Li, Tingyu Qu, Ruicong Yao, Wei Sun, and Marie-Francine Moens. Alleviating exposure bias in diffusion models through sampling with shifted time steps. arXiv preprint arXiv:2305.15583, 2023. ', 'original_lines': '', 'after_paragraph_idx': 68, 'before_paragraph_idx': 66}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. 
It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Matthew McDermott, Brendan Yap, Harry Hsu, Di Jin, and Peter Szolovits. Adversarial contrastive ', 'paragraph_idx': 74, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'Jose Luis Olmos Jr, Caiming Xiong, Zachary Z Sun, Richard Socher, et al. Deep neural language modeling enables functional protein generation across families. bioRxiv, pp. 2021–07, 2021. ', 'modified_lines': 'Weian Mao, Muzhi Zhu, Zheng Sun, Shuaike Shen, Lin Yuanbo Wu, Hao Chen, and Chunhua Shen. De novo protein design using geometric vector field networks. arXiv preprint arXiv:2310.11802, 2023. ', 'original_lines': '12 Under review as a conference paper at ICLR 2025 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 ', 'after_paragraph_idx': 75, 'before_paragraph_idx': 73}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Brian L Trippe, Jason Yim, Doug Tischer, David Baker, Tamara Broderick, Regina Barzilay, and Tommi Jaakkola. Diffusion probabilistic modeling of protein backbones in 3d for the motif- ', 'paragraph_idx': 95, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'language modeling with structure-aware vocabulary. bioRxiv, pp. 2023–10, 2023. ', 'modified_lines': 'Haotian Tang, Yecheng Wu, Shang Yang, Enze Xie, Junsong Chen, Junyu Chen, Zhuoyang Zhang, Han Cai, Yao Lu, and Song Han. Hart: Efficient visual generation with hybrid autoregressive transformer. arXiv preprint arXiv:2410.10812, 2024. 
', 'original_lines': '', 'after_paragraph_idx': 96, 'before_paragraph_idx': 94}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. 
(1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'Woody Ahern, Andrew J Borst, Robert J Ragotte, Lukas F Milles, et al. De novo design of protein structure and function with rfdiffusion. Nature, 620(7976):1089–1100, 2023. ', 'modified_lines': 'Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, et al. Janus: Decoupling visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024. ', 'original_lines': '', 'after_paragraph_idx': 107, 'before_paragraph_idx': 105}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Jason Yim, Brian L Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, and Tommi Jaakkola. Se (3) diffusion model with application to protein backbone generation. arXiv preprint arXiv:2302.02277, 2023. Jason Yim, Andrew Campbell, Emile Mathieu, Andrew YK Foong, Michael Gastegger, Jos´e Jim´enez- Improved ', 'paragraph_idx': 113, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'protein representation learning. bioRxiv, pp. 2022–05, 2022b. ', 'modified_lines': 'Kai Yi, Bingxin Zhou, Yiqing Shen, Pietro Lio, and Yu Guang Wang. Graph denoising diffusion for inverse protein folding. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=u4YXKKG5dX. ', 'original_lines': ' 14 Under review as a conference paper at ICLR 2025 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 ', 'after_paragraph_idx': 114, 'before_paragraph_idx': 112}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'A DPLM-2 TRAINING DPLM-2 takes the discrete structure token sequence and amino acid token sequence as input. As demonstrated in Fig. 1, we concatenate the two sequences into one sequence of double length. DPLM-2 employs an efficient warm-up strategy by initializing with pre-trained sequence-based DPLM (§3.2) to leverage the evolutionary information learned by DPLM for protein structure We introduce a distinct scheduler to control the noise level of structure and sequence flexibly during training (§3.1). Different combinations of structures and sequence schedulers (denoted as tz and ts, respectively) imply training for different applications. Specifically, we mainly focus on ', 'paragraph_idx': 123, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'Antigen-specific antibody design via direct energy-based preference optimization. Advances in neural information processing systems, 2024. ', 'modified_lines': 'Yiheng Zhu, Jialu Wu, Qiuyi Li, Jiahuan Yan, Mingze Yin, Wei Wu, Mingyang Li, Jieping Ye, Zheng Wang, and Jian Wu. Bridge-if: Learning inverse protein folding with markov bridges. arXiv preprint arXiv:2411.02120, 2024. 16 Published as a conference paper at ICLR 2025 IMPLEMENTATION DETAILS A.1 modeling. Considering that the vocabulary of DPLM only consists of amino acids, we expand the vocabulary of DPLM-2 with discrete structure tokens. The embeddings for these new tokens are initialized using the mean and standard deviation of the learned amino acid embeddings. This embedding initialization keeps the distributional statistics of the embedding space consistent with the pre-trained DPLM, ensuring stable early-stage training (for learning structure-sequence alignment) and reducing the risk of extreme gradients that could cause training instability. A.2 DISTINCT NOISE SCHEDULER OF TRAINING ', 'original_lines': 'A.1 ABLATION STUDY In DPLM-2 training, we start with a warmup from the sequence-based pre-trained DPLM to exploit established evolutionary information and augment the data with high-quality AlphaFold-predicted structures from SwissProt (around 200K) and clustered PDB structures. This section evaluates the effects of sequence pre-training and data augmentation on unconditional protein generation. Table 7: Ablation study on the sequence pre-training and training data augmentation. sequence pre-training synthetic structures ✗ ✓ ✗ ✓ ✗ ✗ ✓ ✓ length 100 length 200 length 300 length 400 length 500 scTM clusters scTM clusters scTM clusters scTM clusters scTM clusters 0.9241 0.9610 0.8988 0.9348 20 26 27 35 0.8674 0.9349 0.9182 0.9428 34 47 15 40 0.7667 0.9169 0.9343 0.9232 33 38 13 48 0.5016 0.8643 0.8518 0.9260 25 35 21 40 0.4511 0.7673 0.8288 0.9012 25 52 31 32 We investigate the effect of sequence pre-training by randomly initializing DPLM-2 instead of using DPLM parameters, while for effect of synthetis structures we leverage PDB structures only for training. We conduct experiments on 150M DPLM-2, for each DPLM-2 variant we sample 100 examples for each length in 100, 200, 300, 400 and 500. We compute scTM and the number of difference clusters in each length. Tab. 7 demonstrates that sequence pre-training and data augmentation can significantly improve the designability and diversity, especially in generating long proteins (length > 300). We hypothesize that the limited number of long proteins in PDB leads to insufficient training. In contrast, sequence pretraining, which includes evolutionary data, is essential and can be transferred to improve protein structure modeling and generation quality. Additionally, this evolutionary information boosts sampling diversity. While increasing the amount of training data improves designability, it is less effective in enhancing diversity compared to sequence pretraining. 
By combining both strategies, we achieve the best overall performance, which forms the core of our training strategy. A.2 SELF-MIXUP TRAINING STRATEGY We find that discrete diffusion training will face the exposure bias problem (Ranzato et al., 2016; Bengio et al., 2015), which means mismatch between training and inference. The model is trained to denoise given the ground-truth context during training. However, during inference, the model needs to denoise based on the predicted tokens, which may not be correct and inconsistent with the always-accurate context during training. This may lead to error accumulation and negatively impact the generation performance. To address this issue, we propose a self-mixup training paradigm for discrete diffusion model, enhancing the consistency between training and inference. During training, we perform an additional forward pass, allowing the model to first make predictions and then denoise based on those predictions. 15 Under review as a conference paper at ICLR 2025 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 Tab. 8 shows that the self-mixup training strategy effectively enhances the diversity of samples. We attribute this to the model producing more accurate logits during inference, leading to more diverse reasonable sampling paths instead of converging on the sampling paths with the highest probability, which results in more diverse proteins. Table 8: Ablation study on the self-mixup training strategy. Mixup strategy ✗ ✓ length 100 length 200 length 300 length 400 length 500 scTM clusters scTM clusters scTM clusters scTM clusters scTM clusters 0.9237 0.8812 44 62 0.9180 0.8820 53 62 0.9147 0.9172 48 59 0.9059 0.9099 42 54 0.8896 0.8845 33 38 A.3 IMPLEMENTATION DETAILS modeling. Considering that the vocabulary of DPLM only consists of amino acids, we expand the vocabulary of DPLM-2 with discrete structure tokens. We initialize the embeddings of structure tokens with the mean and standard variation of the learned amino acid embeddings. We hypothesis this will keep the initial embedding distribution remains consistent with the pre-trained DPLM, resulting in more stable training in early stage and preventing excessive gradients that could lead to training crashes. A.4 DISTINCT NOISE SCHEDULER OF TRAINING ', 'after_paragraph_idx': 124, 'before_paragraph_idx': 122}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'The training set of DPLM-2 is composed by experimental data, i.e., PDB (Berman et al., 2000), and high quality synthetic data, i.e., SwissProt (Varadi et al., 2022). We filter the SwissProt data by pLDDT > 85. After filtering, the overall training set contains approximately 200,000 proteins. We ', 'paragraph_idx': 124, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'experiment, the proportion for each task is the same, which is 20%. After training, we can further enhance a specific generation task by supervised finetuning (SFT). 
This involves continuing training for the specific task with a proportion of 100%, while the proportion for other tasks is set to 0%. For ', 'modified_lines': 'example, in Tab. 3, the folding supervised finetuning is performed by continue training with folding task based on a pre-trained DPLM-2 with 100% proportion of training data. A.3 HYPERPARAMETER We train all models using AdamW optimizer (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.95. We use a weight decay of 0.01 and gradient clipping of 0.5. We employ 2K warmup steps until reaching the maximum learning rate, and utilize a linear decay scheduler to decay LR to 10% of the maximum learning rate by the end of training. The maximum learning rate is 1e-4, and the overall training step is 100,000. We utilize the pretrained DPLM as the parameter initialization, and the diffusion timestep is set to 500. We train 150M DPLM-2 with 8 A100 GPUs for 3 days, while 650M with 16 A100 GPUs for 3 days and 3B with 16 A100 GPUs for a week. A.4 DATASET ', 'original_lines': 'example, in Tab. 3, the folding supervised finetuning is performed by continue training based on a pre-trained DPLM-2 with 100% proportion of training data. A.5 DATASET ', 'after_paragraph_idx': 124, 'before_paragraph_idx': 124}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': None, 'context_after': 'scaffolding. Sequence-based, structure-based and co-generation evaluation pipeline of motif- sequence-based ', 'paragraph_idx': 124, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'where the number of proteins with length less than 100 is relatively small, leading to a suboptimal diversity among the short proteins. Therefore, during training, we randomly crop long proteins to short proteins with a probability of 50% for each batch to improve the diversity. ', 'modified_lines': 'A.5 TACKLING EXPOSURE BIAS IN DISCRETE DIFFUSION WITH SELF-MIXUP TRAINING STRATEGY The exposure bias problem, which is described as the input mismatch between training and sampling, has already garnered attention in the research of continuous diffusion (Ning et al., 2023a;b; Li et al., 17 Published as a conference paper at ICLR 2025 2023) and NLP (Ranzato et al., 2016; Bengio et al., 2015). We find that the discrete diffusion model also encounters this issue. According to the Eq.1, the model is trained to be tasked with pθ(x(0) x(t)), | essentially doing masked-prediction. During training, the model makes prediction conditioned on the x(t), which is a mixture of ground-truth and mask tokens as noise: x(t) = αtx(0) + (1 αt)qnoise. However, during inference, the model predicts pθ(x(0) ˆx(t)) conditioned on the previously generated sample ˆx(t), which is a mixture of model prediction and masks, essentially requiring denoising and masked-prediction. The difference between x(t) and ˆx(t) causes a discrepancy between pθ(x(0) x(t)) | and pθ(x(0) ˆx(t)), potentially leading to error accumulation since the model trend to be over-confident | on its predictions (as in training the model is always exposed to ground-truth, hence the name exposure bias), and negatively impacting the generation performance. To mitigate this, we propose to bridge this gap by training model to make predictions conditioned on its own predicted results: (1) predict ˆx(0) conditioned on the ground truth sample x(t), (2) construct the generated sample: ˆx(t) αt)qnoise, (3) compute self-mixup loss according to Eq. 1: ˆx(0) + (1 − | i i i i − λ(t) ← ˆ Jt = Eq(x(0)) bi(t) · log pθ(x(0) i ˆx(t)) | L i (cid:88)1 ≤ ≤ We can illustrate this more clearly with a break-down example. Let the ground-truth x(0) be ABCDE, and the x(t) be [m][m][m]DE as in masked discrete diffusion, where [m] represents the mask token. The training process is as below: (i) Call a model forward to obtain model prediction ˆx(0), which is abcDE (with the ground truth token DE preserved for masked positions), where abc represent model prediction by argmax decoding. (ii) Construct self-mixup ˆx(t). In our experiments, we always replace the ground-truth token in ˆx(0) (DE in this case) with the mask token [m]. Therefore ˆx(t) becomes abc[m][m]. (iii) Compute self-mixup loss, which is essentially cross entropy loss between pθ(x(0) i ˆx(t)) and | x(0) at all positions. More specifically, this can be seen as mask positions are applied masked language modeling loss while non-masked positions are applied denoising autoencoder loss. Moreover, this also improves sample-efficiency compared to typical masked discrete diffusion where training loss is only applied to mask positions. 
In our experiments, we first train DPLM-2 with the original loss in Eq. 1 for 50,000 steps to ensure the prediction quality. This step is crucial; otherwise, the model’s predictions might be poor, leading to an excessively large self-mixup loss and causing training instability. After this initial phase, we continue training with self-mixup loss to mitigate the exposure bias issue. Tab. 6 shows that the self-mixup training strategy effectively enhances the diversity of samples. We conduct experiments with the DPLM-2 650M model on the unconditional generation task. We sample 100 proteins within each length interval and calculate scTM for structure-sequence compatibility and the number of clusters for diversity. We attribute this to the model producing more accurate logits during inference, leading to more diverse reasonable sampling paths instead of converging on the sampling paths with the highest probability, which results in more diverse proteins. Table 6: Ablation study on the self-mixup training strategy. Mixup strategy ✗ ✓ length 100 length 200 length 300 length 400 length 500 scTM clusters scTM clusters scTM clusters scTM clusters scTM clusters 0.9237 0.8812 44 62 0.9180 0.8820 53 62 0.9147 0.9172 48 59 0.9059 0.9099 42 54 0.8896 0.8845 33 38 A.6 ABLATION STUDY ON THE SEQUENCE PRE-TRAINING AND SYNTHETIC STRUCTURES In DPLM-2 training, we start with a warmup from the sequence-based pre-trained DPLM to exploit established evolutionary information and augment the data with high-quality AlphaFold-predicted structures from SwissProt (around 200K) and clustered PDB structures. This section evaluates the effects of sequence pre-training and data augmentation on unconditional protein generation. We investigate the effect of sequence pre-training by randomly initializing DPLM-2 instead of using DPLM parameters, while for effect of synthetis structures we leverage PDB structures only for training. We conduct experiments on 150M DPLM-2, for each DPLM-2 variant we sample 18 Published as a conference paper at ICLR 2025 Table 7: Ablation study on the sequence pre-training and training data augmentation. sequence pre-training synthetic structures ✗ ✓ ✗ ✓ ✗ ✗ ✓ ✓ length 100 length 200 length 300 length 400 length 500 scTM clusters scTM clusters scTM clusters scTM clusters scTM clusters 0.9241 0.9610 0.8988 0.9348 20 26 27 35 0.8674 0.9349 0.9182 0.9428 34 47 15 40 0.7667 0.9169 0.9343 0.9232 33 38 13 48 0.5016 0.8643 0.8518 0.9260 25 35 21 40 0.4511 0.7673 0.8288 0.9012 25 52 31 32 Table 8: Results of codebook size, codebook utilization and lddt ca on the training set and valid set of different structure tokenizers. tokenizer codebook size codebook utilization VQ-VAE-1K LFQ-1K LFQ-2K LFQ-4K LFQ-8K LFQ-16K 1024 1024 1024 4096 8192 16384 63.50% 100.00% 100.00% 100.00% 99.50% 98.60% training set valid set (Cameo 2022) lddt ca lddt ca TM-score RMSD 0.76 0.82 0.84 0.86 0.92 0.92 0.71 0.77 0.79 0.82 0.86 0.87 0.80 0.86 0.88 0.91 0.93 0.94 6.14 4.35 3.62 3.31 2.58 2.32 100 examples for each length in 100, 200, 300, 400 and 500. We compute scTM and the number of difference clusters in each length. Tab. 7 demonstrates that sequence pre-training and data augmentation can significantly improve the designability and diversity, especially in generating long proteins (length > 300). We hypothesize that the limited number of long proteins in PDB leads to insufficient training. 
In contrast, sequence pretraining, which includes evolutionary data, is essential and can be transferred to improve protein structure modeling and generation quality. Additionally, this evolutionary information boosts sampling diversity. While increasing the amount of training data improves designability, it is less effective in enhancing diversity compared to sequence pretraining. By combining both strategies, we achieve the best overall performance, which forms the core of our training strategy. IMPLEMENTATION DETAILS B DISCUSSIONS ON THE STRUCTURE TOKENIZATION B.1 We utilize a GVP-based (Jing et al., 2020) structure encoder from pre-trained GVP-Transformer (Hsu et al., 2022) with its parameters frozen during training. The structure encoder transforms backbone structures into geometric features, which are projected onto a latent embedding using an MLP layer. The structure decoder follows the IPA-based modules from AlphaFold2 (Jumper et al., 2021), using 4 EvoFormer layers without MSA row attention, following ESMFold (Lin et al., 2022), to generate atomic positions from the structure tokens. We train structure tokenizer using the same structure data as our mulitmodal language model, containing both experimental and high-quality structures. The training objective of structure tokenizer includes reconstruction loss, codebook commitment loss, and entropy regularization loss to ensure effective codebook utilization. For the reconstruction loss, we adopt the FAPE loss, violation loss, and distogram loss from AlphaFold2 (Jumper et al., 2021), measuring the difference between predicted and native structures. To further enhance the training, we introduce a sequence prediction head on top of the structure decoder’s final representation and minimize the cross-entropy against the native sequence. B.2 THE UTILIZATION AND INTERPRETABILITY OF STRUCTURE TOKENS In the Fig. 2A we have shown the reconstruction accuracy and an interpretation analysis on the correspondence of structure tokens and local structural elements in terms of secondary structures. We also calculate the codebook utilization in Tab. 8. We find that LFQ-based tokenizers always achieve nearly 100% codebook utilization with more evenly distributed code usage, while vanilla VQ-VAE struggles with codebook collapse. The Fig. 2B demonstrates the interpretability of structure tokens with a informative simplex plot of structure tokens vs second structure. We can observe a strong correlation between a vast majority of the structure tokens and structured local environments, where a lot of structured tokens concentrate on the alpha helix and beta sheet vertices, while some tokens lie between regions or the loop vertice. There are also a subset of structure tokens having less clear clues to specific secondary structures. This suggests that structure tokens mostly capture clear secondary elements, some may correspond to structured local environments (in bewteen helic and sheet), while others could be high-level abstract entities. 19 Published as a conference paper at ICLR 2025 Figure 6: The histogram of num structural motifs v.s. motif clusters for each struc- ture token. We randomly sampling 500 out of 8,192 structure tokens for readability. Figure 7: Visualization of mapping structure tokens to structural motifs. B.3 VISUALIZATION OF MAPPING STRUCTURE TOKENS TO STRUCTURAL MOTIFS We provide more fine-grained insights into what structure tokens learn by mapping structure tokens to structural motifs. 
Specifically, as structure tokens are residue-wise representations, we aim to map each structure token to structural motifs defined as the nearest-neighbor local structural environment of a residue in the training dataset. For efficiency, we used only the PDB dataset. The process is as follow: (1) For each structure in the PDB dataset (approximately 20K in total), we first tokenize the structure into structure tokens and save the pair of structure tokens and 30-nearest-neighbors structural motifs for each residue. We use 30 nearest neighbors because the pre-trained GVPTransformerEncoder, which we used as the structure encoder, employs 30 nearest neighbors as the hyperparameter for geometric features. (2) After processing all structures, we obtain a table where each row corresponds to a structure token and its associated structural motifs (i.e., num structural motifs) (3) To analyze whether a structure token tends to occur in a similar local structural environment, we use Foldseek (TM-threshold = 0.5) to cluster the structural motifs for each structure token (i.e., motif clusters). Although Foldseek may not be entirely accurate in clustering such short and discontinuous structural regions, it provides a reasonable comparative sense of the similarity or difference among all structural motifs associated with each structure token. In Fig. 6, we plot the histogram of num structural motifs v.s. motif clusters for each structure token (randomly sampling 500 out of 8,192 structure tokens to ensure readability). From the visualization, we observe that many structure tokens correspond to highly similar structural motifs (evidenced by a small ratio of the number of motif clusters to the number of total structural motifs), while others exhibit a high degree of ambiguity. Additionally, we visualize the mapping between structure tokens and structural motifs in specific cases. In Fig. 7, we showcase two structure tokens and their corresponding similar structural motifs across four different PDB structures, illustrating the diversity or consistency in the mapped local structural environments. 20 struct_token: 0397struct_token: 3916 Published as a conference paper at ICLR 2025 B.4 LIMITATION ON THE STRUCTURE TOKENIZATION Our approach can be seen as decoupling the learning of the structurally invariant topology to the tokenizer and the language model, and the geometric reasoning on 3D coordinates to the structure decoder parameterized by triangular modules and IPAs from AlphaFold2 with FAPE loss. This shares the similarity to AF2 and ESMfold, where the sequence encoding modules like Evoformer (co-evolution encoding) and ESM (amino-acid encoding) provide invariant features (in the form of single and pair embeddings) to the structure decoder that learns to convert invariant features into 3D coordinates. The AF2-style structure decoder does not enforce strict equivariance to rigid transformations. Instead, it relies on the FAPE loss to ensure structural consistency, which minimizes coordinate errors in a manner that is invariant to global rotations and translations. As such, we suggest that the primary trade-off when using invariant structure tokens instead of 3D coordinates mainly lies in the potential loss of fine-grained structural details. Albeit being the key enabler to multimodal PLMs, structure tokenization is essentially clustering similar local structured environments, which results in lossy compression and the absence of fine-grained structural variation. 
The primary principle of the solution is that we need to ”recover” and preserve the high-frequency variation that gets lost during quantization. We propose some potential directions for mitigation: Separate structure encodings for DPLM-2. We can introduce different structure encoders for encoding and generation purposes, respectively. For parts of a protein where atomic coordinates are already provided, lossy tokenization may not be necessary. Instead, we can use robust features from powerful structure encoders like GVP-Transformer while continuing to use structure tokens for generating the remaining parts. To achieve this, the model can be trained to alternate between these two types of encodings. A similar approach has been applied successfully in recent vision-language MLLMs (Wu et al., 2024), as the vision-language community has also recognized that understanding and generation often require different types of representations. Modeling continuous structure features with hybrid tokenization. In the structure tokenizer, the vector quantizer module converts encoder features into discrete structure token features, but the residuals—differences between the original and quantized features—are lost, removing fine- grained structural details. To address this, we can using continuous generative modeling, such as diffusion/flow-based models, to learn to recover these residuals. This would work by conditioning on the structure tokens and possibly the final hidden states of DPLM-2. The protein structure generation process would involve first generating discrete structure tokens that capture the overall topology, then using those tokens to generate the missing residuals. These residuals would be added up to the structure token embeddings to recover a more complete and accurate structure representation, closer to the features produced by the structure encoder. This approach could significantly improve structure generation. By combining this idea with hybrid structure encodings, DPLM-2 could not only interpret given structures at atomic accuracy but also generate structures that include the missing fine-grained variations. Similar strategies have shown significant success in visual autoregressive generation with visual tokenizers (Tang et al., 2024). C ANALYSIS ON THE SAMPLING STRATEGY We utilize argmax decoding for conditional generation tasks (e.g., folding and inverse folding) to maximize generation accuracy and ensure a fair comparison with DPLM. On the other hand, stochas- tic sampling was employed for unconditional generation or motif-scaffolding tasks to encourage generation diversity while maintaining good generation quality. Specifically, we utilize a temperature-based stochastic approach. We mainly focus on the temperature- annealed version based on the sampling procedure of DPLM (Wang et al., 2024) for better sampling diversity. The overall sampling approach is shown in algorithm 1. The temperature annealing sampling approach introduces more randomness during the initial stage of sampling by using a large temperature, and more fidelity during the final stage of sampling by using a small temperature. This method improves generation diversity while maintaining generation quality. Moreover, we observe that stochastic sampling could also improve the generation diversity in conditional tasks (e.g., inverse folding) while keeping the quality. As shown in Tab. 
9, the argmax decoding strategy picks the token with highest probability at each timestep, yielding sequence with high probability and resulting in high amino acid recovery (AAR). On the other hand, we employ a sampling strategy with annealing temperature from 2.2 to 0.1 to improve diversity, and the generated sequence has a lower AAR while maintaining the same scTM as argmax decoding. This demonstrates that temperature annealing sampling strategy is capable of generating more diverse sequences that, while not similar to the ground truth, still meet the given structural conditions. 21 Published as a conference paper at ICLR 2025 Table 9: Ablation study on the sampling strategy in inverse folding task. Model CAMEO 2022 AAR scTM MultiFlow ESM3-Open DPLM2 650M (argmax decoding) DPLM2 650M (temperature-annealed sampling) 32.28/33.58 47.06/46.24 49.01/50.10 43.15/42.24 0.87/0.94 0.90/0.95 0.88/0.93 0.88/0.93 Algorithm 1 Temperature-annealed stochastic sampling Input: trained network fθ ( ), the total sampling steps T , the minimum temperature τmin and the · maximum temperature τmax. Output: generated sample x(0). for n = 1, 2, . . . , N do Initialize xT,n ∼ Initialize bT,n = 0; qnoise; end for for t = T, . . . , 1 do 1 # Determine the temperature τ of the current timestep t. τ = τmin + t τmin) 1 (τmax − − T − for n = 1, 2, . . . , N do x0,n ∼ Draw Generate vt if bt,n = 1 then (cid:101) Draw u(1) xt Categorical (fθ (xt,n)/τ ); x0,n) 1,n according to log p( qnoise; 1,nxt,n + u(1) t,n; t,n ∼ 1,n = v(1) t − − (cid:101) v(1) t − 1,n 1 − − else t,n ∼ 1,n = v(2) t − Draw u(2) xt end if Let bt − 1,n = bt,n ∧ − (cid:16) qnoise(xt,n); x0,n + 1,n (cid:16) 1,n ∨ v(1) (cid:101) t − end for (cid:17) v(2) t − 1,n x(2) t,n; (cid:17) 1 − v(2) t − 1,n; end for Return x0,1:N . D ANALYSIS ON THE EVALUATION METRICS IN THE UNCONDITIONAL GENERATION In Tab. 2, we observe that DPLM-2 achieves a high scTM score while the scRMSD score is a bit higher than other baselines, e.g., MultiFlow. We will make a detailed discussion of this. We first highlight that the generated samples from DPLM-2 share similar scTM (0.925) and scRMSD (3.9) as native PDB samples, which also exhibit good scTM (0.904) with a little bit higher scRMSD (4.623). Moreover, Additionally, DPLM-2 maintains a balanced structural composition (helix: 0.4, strand: 0.2, coil: 0.45), closely resembling natural distributions. In contrast, for MultiFlow, the officially released model with distillation attains much lower scRMSD (3.2), while the performance of our retrained version (on the same DPLM-2 training set) degrades in both scTM (0.871) and scRMSD (6.58). Lower scRMSD in MultiFlow with distillation, appears to be driven by overrepresentation of structured elements (Fig. 4A), i.e., significantly biasing towards proteins with more helices, with less strands and loops (Fig. 4C). This overrepresentation drives the observed scRMSD improvement but deviates from natural protein diversity. Then we delve into the insight and purpose of the TM-score and RMSD metrics. TM-score emphasizes global topology, while RMSD is sensitive to local structural errors. As such, although scTM and scRMSD are generally correlated, discrepancies can arise. The purpose of TM-score is to solve this sensitivity of RMSD, because RMSD is an average distance of all residue pairs in two structures, a local error (e.g. a misorientation of the tail) will raise a big RMSD value although the global topology is correct. 
In TM-score, however, the small distance is weighted stronger than the big distance, which makes the score insensitive to the local modeling error. As shown in Fig. 4B, some samples 22 Published as a conference paper at ICLR 2025 Table 10: Analysis on the performance degradation in the representation learning task, including HumanPPI, MetalIonBinding and DeepLoc Subcellular. Exp id Model Training set 0 1 2 3 4 5 SaProt DPLM DPLM-2 DPLM w/ fully pretraining on UniRef50 UniRef50 (45M) DPLM-2 w/ finetuning from DPLM DPLM-2 w/ finetuning from DPLM AFDB data (40M) PDB + swissprot (200K) PDB + swissprot (200K) PDB + swissprot (200K) AFDB reps + PDB + swissprot (1.5M) 86.41 73.33 77.22 86.41 84.44 87.78 HumanPPI MetalIonBinding DeepLoc Subcellular Acc(%) Acc(%) 75.75 62.25 69.47 75.15 74.28 - Acc(%) 85.57 63.49 66.77 84.56 82.98 83.42 from DPLM-2 with higher loop proportion are more conformationally flexible, hence may show high scTM (>0.9) but worse scRMSD (>2.0), similar to natural protein. However, this does not necessarily indicate a limitation in generation quality but reflects differences in metric sensitivity. As a result, the in-silico designability of protein generation should be evaluated comprehensively using both scTM and scRMSD, as each metric offers distinct insights and serves different purposes. For users aiming to generate samples with accurate global topology, scTM serves as a reliable indicator, whereas scRMSD may occasionally exclude reasonable structures. Conversely, for applications requiring structurally rigid and stable proteins, such as functional designs (e.g., binder design), scRMSD has been shown to correlate more strongly with in vitro success rates, as suggested by RFDiffusion. E ANALYSIS ON THE PERFORMANCE DEGRADATION IN REPRESENTATION LEARNING In Tab. 5, we find DPLM-2 demonstrates a performance degradation compared with the DPLM, which is used for parameter initialization for DPLM-2, in some tasks (e.g., DeepLoc Subcellular). We hypothesize two potential causes for the observed degradation: (1) DPLM-2 needs to accommodate additional structural representations given the same model capacity (parameters), which could negatively impact the representation learning performance. (2) As continuous training on smaller magnitude of structure data, DPLM-2 may experience catastrophic forgetting of the representation power gained during DPLM’s large-scale sequence pretraining. To explore (1), we eliminated pretraining factors by retraining both DPLM and DPLM-2 with random initialization on the SwissProt and PDB datasets for 100K training steps. Additionally, we evaluated performance across all three tasks (HumanPPI, MetalIonBinding and DeepLoc Subcellular) where DPLM-2 underperformed compared to DPLM. As shown in the Tab. 10, when large-scale sequence pretraining is removed, DPLM-2 significantly outperforms DPLM (exp 2 vs exp 1). This indicates that incorporating structural information enhances performance rather than harming it, which rejects the hypothesis (1). However, when DPLM undergoes large-scale pretraining and DPLM-2 is subsequently trained from the pretrained DPLM, the performance of DPLM-2 on certain tasks diminishes (exp 4 vs exp 3). Given the relatively smaller structure data for DPLM-2 training, this suggests that catastrophic forgetting occurs during DPLM-2’s multimodal training, reducing the advantages of large-scale pretraining. 
To verify and mitigate this, we curate additional 1.3M predicted structures from AFDB representative (Barrio-Hernandez et al., 2023), and trained DPLM-2 on this larger data. The experimental results show that the amount of structure data is indeed a key factor for better multimodal protein representations, leading to significantly improved performance over the original data (exp 5 vs exp 4). In particular, on HumanPPI, enlarging data from 200K to 1.5M helps DPLM-2 attain 2.3% improvement, and also outperforms SaProt, a strong multimodal PLM trained with 40M Foldseek tokenized AFDB data. F MORE EMPIRICAL RESULTS F.1 COMPREHENSIVE EVALUATION ON THE UNCONDITIONAL SEQUENCE GENERATION In addition to the Tab. 2, we have conducted more comprehensive evaluations on the unconditional generation in terms of protein sequence, including: (1) sequence and structural diversity: we conduct MMseqs2 clustering and Foldseek structural clustering at different thresholds. For MMseqs2 clustering, we cluster samples with pLDDT > 70, while for foldseek clustering we cluster samples with scTM > 0.5. This quality threshold for diversity is inspired by MultiFlow, which is more informative by avoiding diverse but messy sequences. Then, we divide the number of clusters by the total number of samples to measure the diversity. (2) sequence naturalness: we calculate perplexity 23 Published as a conference paper at ICLR 2025 Table 11: Comprehensive analysis on the protein sequence generation. We evaluate the performance in terms of pLDDT, sequence and structural diversity, sequence naturalness and sequence novelty. Evaluation Metric structural plausibility (↑) pLDDT sequence diversity (↑) MMseqs2 cluster at seq-id=0.3 & plddt > 70 MMseqs2 cluster at seq-id=0.5 & plddt > 70 MMseqs2 cluster at seq-id=0.7 & plddt > 70 MMseqs2 cluster at seq-id=0.9 & plddt > 70 structural diversity (↑) Foldseek at TMscore=0.3 & scTM > 0.5 Foldseek at TMscore=0.5 & scTM > 0.5 Foldseek at TMscore=0.7 & scTM > 0.5 Foldseek at TMscore=0.9 & scTM > 0.5 sequence naturalness (↓) ProGen2 ppl sequence novelty (↓) MMseq2 search against PDB+swissprot MultiFlow MultiFlow (official w/ distillation) (retrained on DPLM-2 data) DPLM DPLM-2 79.4 0.804 0.860 0.862 0.862 0.030 0.500 0.962 0.990 62.6 0.204 0.294 0.294 0.294 0.080 0.440 0.830 0.910 84.0 83.7 0.740 0.745 0.815 0.885 – – – – 0.745 0.755 0.795 0.895 0.198 0.545 0.646 0.746 8.11 ± 2.08 9.15 ± 2.77 4.33 ± 2.51 4.08 ± 2.00 0.306 0.312 0.304 0.475 Table 12: Unconditional generation from the empirical length distribution. pLDDT scRMSD scTM Length Length interval: [100, 200, ..., 500] Training set (PDB+Swissprot) length dist. 0.925 0.929 0.085 0.086 3.899 3.967 3.723 3.257 ± ± ± ± 82.686 83.698 as a measure of naturalness with ProGen2-large (Nijkamp et al., 2022). (3) sequence novelty: we calculate novelty through sequence identity to the nearest neighbor in the training set. All models generate 100 samples per length in the range of 100, 200, 300, 400 and 500 for evaluation, with the results demonstrated in the Tab. 11. One particularly insightful observation is the distinct behavior of MultiFlow (w/ distillation) and DPLM-2 regarding structural diversity. Specifically, DPLM-2 exhibits greater diversity under strict TM-score thresholds ( 0.5), while MultiFlow achieves better diversity at higher TM-score thresholds ( 0.7). 
Combined with the average inner-TM scores (DPLM-2: 0.275, MultiFlow: 0.356) , this suggests that DPLM-2 excels at generating diverse structures in terms of global topologies but exhibits limited structural variation within each cluster. This finding highlights a key limitation of the current structural tokenization approach: the loss of fine-grained structural variations, emphasizing the need for future improvements in this area. Additionally, DPLM-2 achieves the lowest ProGen2 perplexity, while its sequence identity to training data (0.475) is higher than that of DPLM and MultiFlow. This indicates that the sequences generated by DPLM-2 align more closely with the natural distribution. ≥ ≤ F.2 UNCONDITIONAL GENERATION FROM THE EMPIRICAL LENGTH DISTRIBUTION In our paper, we follow the setting in the MultiFlow and sample within length intervals in the unconditional generation, ensuring fair comparisons with previous models under the similar settings to better assess the strengths and limitations of our models. Meanwhile, DPLM-2 is capable of generating proteins from the empirical length distribution. Specifically, we sample 2048 sequences with length sampled from the length distribution of PDB and SwissProt datasets. Tab. 12 demonstrates that DPLM-2 can generate highly plausible proteins from the empirical length distribution, which is consistent with sampling with length intervals. F.3 REPRESENTATION LEARNING EVALUATION WITH MORE BASELINES We have added more recent strong baselines, such as GNN-based methods (e.g., GearNet), in addition to Tab. 5 to make a more comprehensive comparison on the representation learning tasks, as shown in Tab. 13. This demonstrates that DPLM-2 is capable of utilizing both protein structure and sequence to generate more informative representations for series of downstream tasks. F.4 INVERSE FOLDING EVALUATION WITH MORE BASELINES For inverse folding task, we mainly focus on the comparison with other multimodal generative models (MultiFlow, ESM3) in the Tab. 4. We have also added more recognized baseline methods in inverse folding evaluation, as shown in Tab. 14. 24 Published as a conference paper at ICLR 2025 Table 13: Representation learning performance on various protein predictive tasks, comparing between DPLM-2 and more recent strong baselines. means results are quoted from SaProt paper, while means results are quoted from their respective paper. † ∗ Models Thermostability HumanPPI Metal Ion Binding EC GO BP CC MF DeepLoc Subcellular Acc (%) Binary Acc (%) Spearman’s ρ Acc (%) Acc (%) Fmax Fmax Fmax Fmax †SaProt (650M) †SaProt-GearNet (650M) †MIF-ST (Yang et al., 2022b) †GearNet (Zhang et al., 2023) ∗GearNet updated (Zhang et al., 2023) ∗CoupleNet [1] ∗CDConv [2] ∗ESM2-650M-S [3] ∗VABS-NET [4] ∗ESM-GearNet-INR-MC [5] ESM2 (650M) DPLM (650M) DPLM-2 (650M) 0.724 0.660 0.694 0.571 – – – – – – 0.691 0.695 0.714 86.41 85.80 75.54 73.86 – – – – – – 84.78 86.41 84.44 75.75 74.44 75.08 71.26 – – – – – – 71.88 75.15 74.28 0.882 0.889 0.807 0.874 0.890 0.866 0.820 0.823 0.900 0.896 0.868 0.875 0.682 0.678 0.633 0.644 0.681 0.669 0.654 0.649 0.695 0.683 0.670 0.680 0.486 0.522 0.375 0.481 0.488 0.467 0.453 0.463 0.531 0.518 0.473 0.480 0.479 0.508 0.322 0.476 0.464 0.494 0.479 0.519 0.579 0.504 0.470 0.478 0.881 0.682 0.493 0.481 85.57 84.16 78.96 69.45 – – – – – – 83.68 84.56 82.98 93.55 93.63 91.76 89.18 – – – – – – 92.28 93.09 93.64 We conduct experiments on CATH 4.2 testset. 
We ob- serve that DPLM-2 is able to achieve close results with the strong baselines despite slightly lower scTM. To fur- ther improve scTM to bridge the last gap, there are sev- eral potential directions: (1) inverse folding SFT training: DPLM-2 conducts this task in a zero-shot manner while other systems are purpose-built models, thus task-oriented SFT training could help as we have observed in folding; (2) better structure modeling includes introducing separate structure encoders for structure encoding and generation purposes, or hybrid tokenization for recovering the lost fine-grain structural variations, as discussed in the §B.4. Table 14: Inverse folding performance comparison between DPLM-2 and other baselines on the CATH 4.2 testset. † means results are quoted from their re- spective paper. Model scTM AAR †Knowledge-Design (Gao et al., 2023) †GraDe-IF (Yi et al., 2023) †MMDesign (Zheng & Li, 2024) †VFN-IFE (Mao et al., 2023) PiFold (Gao et al., 2022) †Bridge-IF (Zhu et al., 2024) ProteinMPNN (Dauparas et al., 2022) LM-Design (Zheng et al., 2023b) DPLM-2 w/ argmax decoding DPLM-2 w/ temperature-annealed sampling 60.77 52.21 54.88 62.67 51.66 58.59 45.96 54.41 42.70 36.30 – – – – – – 0.87 0.88 0.84 0.84 F.5 MOTIF SCAFFOLDING Evaluation Pipeline. We evaluate DPLM-2 in sequence-based, structure-based and co-generation ways. The overall illustration is shown in Fig. 8. We focus on the two aspects: overall quality and motif part consistency. The assessment of overall quality varies across different approaches. Specifically, (1) For sequence-based method, we only take the generated sequence and utilize ESMFold to obtain the predicted structure, and the pLDDT score provided by ESMFold is used to assess overall quality. (2) For structure-based method, we only take the generated structure, and then leverage ProteinMPNN to predict the sequence, followed by ESMFold to predict the structure, where overall quality is assessed by scTM. (3) For co-generation method, we take both the generated structure and sequence, and predict structure given generated sequence with ESMFold, where scTM is calculated between generated structure and ESMFold predicted structure to evaluate overall quality. Considering that the ground truth motif structure is given, we only utilize the ESMFold predicted structure to calculate motif-RMSD. Result of Each Problem. Tab. 15 presents the result of each motif-scaffolding problem. DPLM- 2 achieves the best average success rate in each evaluation. Compared with ESM3, DPLM-2 shows better results in 12 problems in co-generation evaluation and 10 problems in sequence- based evaluation. Meanwhile, DPLM-2 outperforms RFDiffusion in 14 problems in structure-based evaluation. This demonstrates that DPLM-2 can achieve strong performance under various evaluation methods. We also find that taking the best result from 8 samples can bring significant improvement compared to 1 sample, especially in terms of success rate. In the co-generation evaluation, DPLM2 with sampling 8 times improves the success rate of most of the problems by a large margin. We hypothesize that sampling eight times largely alleviates errors caused by randomness in the sampling process, thereby producing a more suitable scaffold for the given motif. G DISCUSSION ON THE CONDITIONAL INDEPENDENCE ASSUMPTION In the Eq. 2, we make a conditional independence assumption between the protein structure and sequence. 
However, conditional independence is not a special assumption made by DPLM-2, it is a fundamental assumption made by diffusion models in general and their multimodal extensions, derived from the nature of their forward and backward processes. Previous theoretical studies on 25 Published as a conference paper at ICLR 2025 Figure 8: Table 15: Motif-scaffolding results of each problem. * means best result from 8 samples. ', 'original_lines': ' 16 Under review as a conference paper at ICLR 2025 Figure 6: A.6 HYPERPARAMETER We train all models using AdamW optimizer (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.95. We use a weight decay of 0.01 and gradient clipping of 0.5. We employ 2K warmup steps until reaching the maximum learning rate, and utilize a linear decay scheduler to decay LR to 10% of the maximum learning rate by the end of training. The maximum learning rate is 1e-4, and the overall training step is 100,000. We utilize the pretrained DPLM as the parameter initialization, and the diffusion timestep is set to 500. We train 150M DPLM-2 with 8 A100 GPUs for 3 days, while 650M with 16 A100 GPUs for 3 days and 3B with 16 A100 GPUs for a week. B MOTIF SCAFFOLDING B.1 EVALUATION PIPELINE We evaluate DPLM-2 in sequence-based, structure-based and co-generation ways. The overall illustration is shown in Fig. 6. We focus on the two aspects: overall quality and motif part consistency. The assessment of overall quality varies across different approaches. Specifically, (1) For sequence-based method, we only take the generated sequence and utilize ESMFold to obtain the predicted structure, and the pLDDT score provided by ESMFold is used to assess overall quality. (2) For structure-based method, we only take the generated structure, and then leverage ProteinMPNN to predict the sequence, followed by ESMFold to predict the structure, where overall quality is assessed by scTM. (3) For co-generation method, we take both the generated structure and sequence, and predict structure given generated sequence with ESMFold, where scTM is calculated between generated structure and ESMFold predicted structure to evaluate overall quality. Considering that the ground truth motif structure is given, we only utilize the ESMFold predicted structure to calculate motif-RMSD. B.2 RESULT OF EACH PROBLEM Tab. 9 presents the result of each motif-scaffolding problem. DPLM-2 achieves the best average success rate in each evaluation. Compared with ESM3, DPLM-2 shows better results in 12 problems in co-generation evaluation and 10 problems in sequence-based evaluation. Meanwhile, DPLM-2 outperforms RFDiffusion in 14 problems in structure-based evaluation. This demonstrates that DPLM-2 can achieve strong performance under various evaluation methods. We also find that taking the best result from 8 samples can bring significant improvement compared to 1 sample, especially in terms of success rate. In the co-generation evaluation, DPLM2 with sampling 8 times improves the success rate of most of the problems by a large margin. We hypothesize that sampling eight times largely alleviates errors caused by randomness in the sampling process, thereby producing a more suitable scaffold for the given motif. 
C RELATED WORK C.1 PROTEIN LANGUAGE MODELS There is growing interest in developing protein LMs at the scale of evolution, such as the series of ESM (Rives et al., 2019; Lin et al., 2022), TAPE (Rao et al., 2019), ProtTrans (Elnaggar et al., 2021), PRoBERTa (Nambiar et al., 2020), PMLM (He et al., 2021), ProteinLM (Xiao et al., 2021), [figure residue: motif-scaffolding evaluation criteria (sequence-based, structure-based, co-generation): motif-RMSD < 1.0, pLDDT > 70, TM-score > 0.8; * means best of 8 samples] Table 9: Motif-scaffolding results of each problem. * means best result from 8 samples. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 124}, {'section': 'Abstract', 'after_section': None, 'context_after': 'PLUS (Min et al., 2021), Adversarial Masked LMs (McDermott et al., 2021), ProteinBERT (Brandes et al., 2022), CARP (Yang et al., 2022a) in masked language modeling (MLM) paradigm, Prot- GPT2 (Ferruz et al., 2022) in causal language modeling paradigm, and several others (Melnyk et al., ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '19/24 0.53 ', 'modified_lines': 'diffusion models have shown the convergence between generated samples distribution and data distribution is guaranteed under such conditional independence. In this paper, we have empirical evidence showing the consistency/compatibility between co-generated structures and sequences (e.g., scTM for co-generation), and we believe a mathematical proof of this is beyond the scope of this paper and can refer to the established theoretical results on diffusion. Nevertheless, we do love to elaborate on our thoughts and understanding of this as follows. Conditional independence in diffusion models in general. Conditional independence over the ele- xt), is a prevailing assumption ments of high-dimensional data, i.e., pθ(xt in diffusion probablistic models, both continuous and discrete variants, thanks to their iterative nature of probabilistic modeling. For example, in continuous diffusion models for vision generation, the denoising networks learn to reconstruct a denoised image at each timestep t 1 by simultaneously and independently operating over all pixels conditioned on the previous noisier pixels of the image and the current timestep (or equivalently noise level) t. So as for discrete diffusion, where discrete diffusion for text or protein sequence treats tokens of a sequence of xt 1 independently given xt. 
Several recent works have established the theoretical foundations on the convergence analysis of both continuous diffusion (Chen et al., 2023) and discrete diffusion (Chen & Ying, 2024; Zhang et al., 2024), showing that there are theoretical guarantees of the convergence of the generated d i=1 pθ(xt xt) = 1,[i]| 1| (cid:81) − − − − 26 *RFDiffESM3*DPLM2DPLMEvoDiffseqpred: ✓structpred: !motif-preservingdesignabilityRMSD(ESMFold(seqpred)[motif],structnative[motif])<1.0predictionpLDDT(ESMFold(seqpred))>70seqpred: !structpred: ✓motif-preservingdesignabilityRMSD(ESMFold(PMPNN(structpred))[motif],structnative[motif])<1.0predictionTMScore(ESMFold(PMPNN(structpred)), structpred)>0.8seqpred: ✓structpred: ✓motif-preservingdesignabilityRMSD(ESMFold(seqpred)[motif],structnative[motif])<1.0predictionTMScore(ESMFold(seqpred), structpred)>0.8DPLM2ESM3DPLM2*DPLM2sequence-basedstructure-basedco-generation* means best of 8 samplessequence-basedstructure-basedco-generation Published as a conference paper at ICLR 2025 − − − − − 1) (cid:81) 1, yt 1,[i]| 1,[j]| xt, yt j pθ(yt x1, ..., xd} { of the generated samples). independence between modalities is generally made pθ(xt sample distribution of the diffusion models and the data distribution, which means that a well-learned diffusion models can preserve the statistical structure of the data (in other words, the consistency between the elements x = Conditional independence in multimodal diffusion models. Multimodal diffusion mod- In this case, els aim to accommodate two or more modalities using a unified models. xt, yt) = conditional 1| xt, yt). For instance, UniDiffuser (Bao et al., 2023) is a i pθ(xt multimodal continuous diffusion model that handles text and image modalities independently at each (cid:81) timestep, conditioned on the predictions from the previous timestep. Multiflow (Campbell et al., 2024), on the other hand, factorizes protein data into three modalities—translation, orientation, and amino acid type—assuming conditional independence. It establishes a multimodal diffusion/flow- based model by combining three types of stochastic processes over Euclidean, SO(3), and categorical spaces for these modalities. In DPLM-2, we adopt a unified discrete diffusion approach where structure tokens and amino acid tokens are treated as conditionally independent. While theoretical guarantees for the convergence of mixture diffusion processes are still under-explored, existing discrete diffusion theory (Chen & Ying, 2024) ensures that a well-trained DPLM-2 can converge to the tokenized structure-sequence data distribution, supporting consistency between structure and sequence tokens. Additionally, theoretical studies on non-autoregressive Transformers for text generation, which are akin to masked discrete diffusion, indicate that the learning difficulty of such models can be evaluated through conditional total correlation. This dataset-dependent measure captures the discrepancy between a joint data distribution and a fully factorized distribution under conditional independence (Huang et al., 2022). These studies suggest that simplifying the target data, for instance, by using tokenized structure instead of 3D coordinates, reduces conditional total correlation, thereby enhancing both learning and generation quality. 
Given the consistency of structure tokens and amino acid can be ensured to learn in DPLM-2 by previous theoretical results, the overall structure and sequence consistency can be achieved with a decent structure tokenizer, such as the one proposed in this paper, which accurately maps structure tokens to their atomic coordinates. H RELATED WORK H.1 PROTEIN LANGUAGE MODELS There is growing interest in developing protein LMs at the scale of evolution, such as the series of ESM (Rives et al., 2019; Lin et al., 2022), TAPE (Rao et al., 2019), ProtTrans (Elnaggar et al., 2021), PRoBERTa (Nambiar et al., 2020), PMLM (He et al., 2021), ProteinLM (Xiao et al., 2021), ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 −v(2)t−', 'after_section': '1 −v(2)t−', 'context_after': 'competency in designing protein structure despite being exclusively trained on sequences. Diffusion models have become popular tools in structural biology for protein generation, and their utility has been demonstrated across a range of generative tasks in recent years. Trippe et al. (2022), along with others, have introduced several diffusion model variants, each with its unique ', 'paragraph_idx': 164, 'before_section': '1 −v(2)t−', 'context_before': 'on function (Meier et al., 2021), antibody infilling (Melnyk et al., 2022a) and many other general purposes (Rives et al., 2019). Simultaneously, Verkuil et al. (2022) demonstrate that the large scale protein LMs can generate de novo proteins by generalizing beyond natural proteins, both theoretically ', 'modified_lines': 'and experimentally validating their hypothesis in exhaustive detail, in which protein LMs demonstrate H.2 PROTEIN STRUCTURE GENERATIVE MODELS ', 'original_lines': 'and experimentally validating their hypothesis in exhaustive detail, in which pLMs demonstrate C.2 PROTEIN STRUCTURE GENERATIVE MODELS ', 'after_paragraph_idx': 164, 'before_paragraph_idx': 164}]
2025-02-14 02:56:45
ICLR.cc/2025/Conference
Mmva5kEdPa
iMn64eWhVN
[]
2025-02-14 05:47:53
ICLR.cc/2025/Conference
iMn64eWhVN
092wPIgWU2
[{'section': '2 PRELIMINARIES', 'after_section': None, 'context_after': '∼ ∼ ', 'paragraph_idx': 11, 'before_section': '2 PRELIMINARIES', 'context_before': 'x(t ', 'modified_lines': ' − ', 'original_lines': '− ', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': '2 PRELIMINARIES', 'after_section': None, 'context_after': '≤ 11 ', 'paragraph_idx': 11, 'before_section': '2 PRELIMINARIES', 'context_before': '− ', 'modified_lines': ' − ', 'original_lines': '− ', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': 'Abstract', 'after_section': None, 'context_after': 'To further analyze the properties of different model, we examine their secondary structure distribution against natural proteins from PDB. Proteins sampled by DPLM-2 have secondary structures most similar to natural proteins. As ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'of ablation study, please refer to §A.6. 4.1.2 DPLM-2 GENERATES PROTEINS THAT RESEMBLES NATURAL PROTEINS ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'lddt ca ', 'paragraph_idx': 128, 'before_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_before': 'training set ', 'modified_lines': 'valid set (CAMEO 2022) ', 'original_lines': 'valid set (Cameo 2022) ', 'after_paragraph_idx': 128, 'before_paragraph_idx': 128}, {'section': 'Abstract', 'after_section': None, 'context_after': '21 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'decoding strategy picks the token with highest probability at each timestep, yielding sequence with high probability and resulting in high amino acid recovery (AAR). On the other hand, we employ a sampling strategy with annealing temperature from 2.2 to 0.1 to improve diversity, and the generated ', 'modified_lines': '', 'original_lines': 'sequence has a lower AAR while maintaining the same scTM as argmax decoding. This demonstrates that temperature annealing sampling strategy is capable of generating more diverse sequences that, while not similar to the ground truth, still meet the given structural conditions. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. 
(2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'context_after': 'Algorithm 1 Temperature-annealed stochastic sampling Input: trained network fθ ( ', 'paragraph_idx': 143, 'before_section': None, 'context_before': '0.88/0.93 0.88/0.93 ', 'modified_lines': 'sequence has a lower AAR while maintaining the same scTM as argmax decoding. This demonstrates that temperature annealing sampling strategy is capable of generating more diverse sequences that, while not similar to the ground truth, still meet the given structural conditions. ', 'original_lines': '', 'after_paragraph_idx': 143, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '22 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Then we delve into the insight and purpose of the TM-score and RMSD metrics. TM-score emphasizes global topology, while RMSD is sensitive to local structural errors. As such, although scTM and scRMSD are generally correlated, discrepancies can arise. The purpose of TM-score is to solve this ', 'modified_lines': '', 'original_lines': 'sensitivity of RMSD, because RMSD is an average distance of all residue pairs in two structures, a local error (e.g. a misorientation of the tail) will raise a big RMSD value although the global topology is correct. In TM-score, however, the small distance is weighted stronger than the big distance, which makes the score insensitive to the local modeling error. As shown in Fig. 4B, some samples ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 −v(2)t−', 'after_section': '1 −v(2)t−', 'context_after': 'from DPLM-2 with higher loop proportion are more conformationally flexible, hence may show high scTM (>0.9) but worse scRMSD (>2.0), similar to natural protein. 
However, this does not necessarily indicate a limitation in generation quality but reflects differences in metric sensitivity. ', 'paragraph_idx': 146, 'before_section': None, 'context_before': '82.98 83.42 ', 'modified_lines': 'sensitivity of RMSD, because RMSD is an average distance of all residue pairs in two structures, a local error (e.g. a misorientation of the tail) will raise a big RMSD value although the global topology is correct. In TM-score, however, the small distance is weighted stronger than the big distance, which makes the score insensitive to the local modeling error. As shown in Fig. 4B, some samples ', 'original_lines': '', 'after_paragraph_idx': 146, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '23 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In addition to the Tab. 2, we have conducted more comprehensive evaluations on the unconditional generation in terms of protein sequence, including: (1) sequence and structural diversity: we conduct MMseqs2 clustering and Foldseek structural clustering at different thresholds. For MMseqs2 ', 'modified_lines': '', 'original_lines': 'clustering, we cluster samples with pLDDT > 70, while for foldseek clustering we cluster samples with scTM > 0.5. This quality threshold for diversity is inspired by MultiFlow, which is more informative by avoiding diverse but messy sequences. Then, we divide the number of clusters by the total number of samples to measure the diversity. (2) sequence naturalness: we calculate perplexity ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 DISCUSSIONSIn this paper, we introduce DPLM-2, a multimodal diffusion protein language model that understands,generates and reasons over protein structure and sequence, aiming to severe as a mulimodal foundationfor protein. Despite promising performance spanning protein co-generation, folding, inverse foldingand conditional motif-scaffolding with mulimodal input and output, there remains several limitationsdeserving to be addressed. (1) Structure data: Our findings indicate that while structure awarenessmay help with predictive tasks, the limited structure data constrains DPLM-2’s ability to learn robustrepresentations. It is also important to account for longer protein chains and multimers in futurestudies. (2) Trade-off of discrete latent representation: Tokenizing structure into discrete symbolsfacilitates multimodal protein language models and co-generation but may come at the cost of losingfine-grained structural details and control, such as precise atomic positions and inter-atomic distances.Future work should aim to also integrate the strengths of data-space structure-based generative modelsinto sequence-based multimodal language models to maximize the best of both worlds.', 'after_section': None, 'context_after': 'as a measure of naturalness with ProGen2-large (Nijkamp et al., 2022). (3) sequence novelty: we calculate novelty through sequence identity to the nearest neighbor in the training set. All models generate 100 samples per length in the range of 100, 200, 300, 400 and 500 for evaluation, ', 'paragraph_idx': 129, 'before_section': None, 'context_before': '82.686 83.698 ', 'modified_lines': 'clustering, we cluster samples with pLDDT > 70, while for foldseek clustering we cluster samples with scTM > 0.5. This quality threshold for diversity is inspired by MultiFlow, which is more informative by avoiding diverse but messy sequences. 
Then, we divide the number of clusters by the total number of samples to measure the diversity. (2) sequence naturalness: we calculate perplexity ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '24 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'to Tab. 5 to make a more comprehensive comparison on the representation learning tasks, as shown in Tab. 13. This demonstrates that DPLM-2 is capable of utilizing both protein structure and sequence to generate more informative representations for series of downstream tasks. ', 'modified_lines': '', 'original_lines': ' F.4 INVERSE FOLDING EVALUATION WITH MORE BASELINES For inverse folding task, we mainly focus on the comparison with other multimodal generative models (MultiFlow, ESM3) in the Tab. 4. We have also added more recognized baseline methods in inverse folding evaluation, as shown in Tab. 14. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '†Knowledge-Design (Gao et al., 2023) †GraDe-IF (Yi et al., 2023) †MMDesign (Zheng & Li, 2024) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'spective paper. Model ', 'modified_lines': '', 'original_lines': 'scTM AAR ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '25 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'with sampling 8 times improves the success rate of most of the problems by a large margin. We hypothesize that sampling eight times largely alleviates errors caused by randomness in the sampling process, thereby producing a more suitable scaffold for the given motif. ', 'modified_lines': '', 'original_lines': ' G DISCUSSION ON THE CONDITIONAL INDEPENDENCE ASSUMPTION In the Eq. 2, we make a conditional independence assumption between the protein structure and sequence. However, conditional independence is not a special assumption made by DPLM-2, it is a fundamental assumption made by diffusion models in general and their multimodal extensions, derived from the nature of their forward and backward processes. Previous theoretical studies on ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 −v(2)t−', 'after_section': '1 −v(2)t−', 'context_after': 'diffusion models have shown the convergence between generated samples distribution and data distribution is guaranteed under such conditional independence. In this paper, we have empirical evidence showing the consistency/compatibility between co-generated structures and sequences (e.g., ', 'paragraph_idx': 160, 'before_section': None, 'context_before': '19/24 0.53 ', 'modified_lines': 'G DISCUSSION ON THE CONDITIONAL INDEPENDENCE ASSUMPTION In the Eq. 2, we make a conditional independence assumption between the protein structure and sequence. However, conditional independence is not a special assumption made by DPLM-2, it is a fundamental assumption made by diffusion models in general and their multimodal extensions, derived from the nature of their forward and backward processes. Previous theoretical studies on ', 'original_lines': '', 'after_paragraph_idx': 160, 'before_paragraph_idx': None}]
2025-02-14 15:26:50
ICLR.cc/2025/Conference
JSFSCQ1kZ1
kXOUOoZBbZ
[{'section': '2.1 PROBLEM FORMULATION', 'after_section': None, 'context_after': 'distance of our reference embeddings {xref,i}nid imates the manifold of embeddings X . To this end, for each reference embedding {xref,i}nid find the closest embedding in {xi}ngallery in Eq. 1 as a regularized min-max optimization as follows: i=1 , which approx- i=1, we and minimize their distance. We can write the optimization i=1 to the set of embeddings {xi}ngallery i=1 ', 'paragraph_idx': 9, 'before_section': None, 'context_before': ': embeddings of a gallery of face images, ', 'modified_lines': 'of embeddings X . To this end, we consider a set of face images {Ii}ngallery extract their embeddings to have set of valid embeddings {xi}ngallery as a gallery of images2 and i=1 . Then, we try to minimize the i=1 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 HYPERFACE SYNTHETIC FACE DATASET', 'after_section': None, 'context_after': '4 Table 1: Comparison of recognition performance of face recognition models trained with different synthetic datasets and a real dataset (i.e., CASIA-WebFace). The performance reported for each ', 'paragraph_idx': 19, 'before_section': '2.2 HYPERFACE SYNTHETIC FACE DATASET', 'context_before': '(3) ', 'modified_lines': '2The gallery of face images {Ii} ngallery i=1 can be generated using an unconditional face generator network such as StyleGAN (Karras et al., 2020), Latent Diffusion Model (LDM) (Rombach et al., 2022), etc. Published as a conference paper at ICLR 2025 ', 'original_lines': 'where β is a hyperparamter that controls the variations to the reference embedding. Figure 2 depicts the block diagram of our synthetic dataset generation process. Algorithm 3 in Appendix F also present a pseudo-code of dataset generation process. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 19}, {'section': '2.2 HYPERFACE SYNTHETIC FACE DATASET', 'after_section': None, 'context_after': '3 EXPERIMENTS 3.1 EXPERIMENTAL SETUP ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '94.32 ', 'modified_lines': 'where β is a hyperparamter that controls the variations to the reference embedding. Figure 2 depicts the block diagram of our synthetic dataset generation process. Algorithm 3 in Appendix F also present a pseudo-code of dataset generation process. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 EXPERIMENTAL SETUP', 'after_section': None, 'context_after': '3.2 ANALYSIS Comparison with Previous Synthetic Datasets: We compare the recognition performance of face recognition models trained with our synthetic dataset and previous synthetic datasets in the literature. We use the published dataset for each method and train all models with the same con- figuration for different datasets to prevent the effect of other hyperparameters (such as number of epochs, batch size, etc.). 
For a fair comparison, we consider the versions of datasets with a similar ', 'paragraph_idx': 27, 'before_section': '3.1 EXPERIMENTAL SETUP', 'context_before': 'using 10-fold cross-validation for each of benchmarking datasets. The source code of our experi- ments and generated datasets are publicly available3. ', 'modified_lines': '3Project page: https://www.idiap.ch/paper/hyperface 5 Published as a conference paper at ICLR 2025 ', 'original_lines': ' 3The source code and generated datasets will be available upon acceptance of the paper. 5 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 26}, {'section': '3.2 ANALYSIS', 'after_section': None, 'context_after': 'in- creasing the size of gallery improves the performance of the trained model. However, with 10,000 images we can still approximate the manifold of face embeddings on the hy- ', 'paragraph_idx': 28, 'before_section': '3.2 ANALYSIS', 'context_before': 'tion model trained with datasets with 10k identity and optimized with dif- ferent numbers of gallery images. As ', 'modified_lines': 'the results in this table shows, ', 'original_lines': 'the results in this table shows, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '3.2 ANALYSIS', 'after_section': None, 'context_after': '4Only in the dataset used for DigiFace (Bae et al., 2023) there are more identities, because there is only one version available for this dataset, which has a greater number of identities compared to other existing synthetic ', 'paragraph_idx': 31, 'before_section': '3.2 ANALYSIS', 'context_before': 'ngallery ', 'modified_lines': 'As another ablation study, we use different source of images for the gallery set to use in our regularization and solve the HyperFace optimization. We use pretrained StyleGAN (Karras et al., 2020) as a GAN-based generator model and a pretrained latent diffusion model (Rom- bach et al., 2022) as a diffusion-based generator model. We use these generator models and In addition, for our ablation study, we con- randomly generate some synthetic face images. sider some real images from BUPT dataset (Wang et al., 2019) as a dataset of real face images. ', 'original_lines': 'Table 5: Ablation study on the type of data in gallery As another ablation study, we use different source of images for the gallery set to use in our regulariza- tion and solve the HyperFace opti- mization. We use pretrained Style- GAN (Karras et al., 2020) as a GAN- based generator model and a pre- trained latent diffusion model (Rom- bach et al., 2022) as a diffusion-based generator model. We use these generator models and randomly generate some synthetic face im- LFW CPLFW CALFW CFP StyleGAN 98.67 AgeDB Gallery BUPT 89.09 89.14 84.35 87.07 84.68 86.35 89.82 89.17 98.65 87.13 84.77 98.70 89.16 90.03 LDM ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': '3.3 DISCUSSION', 'after_section': '3.3 DISCUSSION', 'context_after': 'hypersphere, which can reduce the complexity in each iteration (i.e., O(b2), where b is the size of batch and b ≤ nid). We further discuss the complexity of our optimization and dataset generation in Appendix A and present further analyses for stochastic optimization, that reduces the complexity of ', 'paragraph_idx': 52, 'before_section': None, 'context_before': 'synthesized dataset and training dataset of StyleGAN, which was used to generate random images for initialization and regularization in the HyperFace optimization. 
', 'modified_lines': 'resources. Meanwhile, most existing synthetic datasets in the literature have a comparable num- ber of identities to our experiments. We should note that in our optimization, we considered all points in each iteration of optimization which introduces quadratic complexity to our optimization. However, we can solve the optimization with stochastic mini-batches of points on the embedding ', 'original_lines': '', 'after_paragraph_idx': 52, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Adversarial Networks (GANs) or probabilistic Diffusion Models (PDMs) to generate synthetic face datasets. Qiu et al. (2021) proposed SynFace and utilised DiscoFaceGAN (Deng et al., 2020) to gen- erate their dataset. They generated different synthetic identities using identity mixup by exploring ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'graphic pipeline to render different identities and also generate different images for each identity by introducing different variations based on face attributes (e.g., variation in facial pose, acces- sories, and textures). In contrast to (Bae et al., 2023) , other papers in the literature used Generative ', 'modified_lines': '', 'original_lines': ' 8 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'nition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3526–3535, 2023. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Gwangbin Bae, Martin de La Gorce, Tadas Baltruˇsaitis, Charlie Hewitt, Dong Chen, Julien Valentin, Roberto Cipolla, and Jingjing Shen. Digiface-1m: 1 million digital face images for face recog- ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyz- ing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'adversarial networks. Pattern Recognition (CVPR), pp. 4401–4410, 2019. 
', 'modified_lines': '', 'original_lines': ' 10 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hatef Otroshi Shahreza, Christophe Ecabert, Anjith George, Alexander Unnervik, S´ebastien Marcel, Nicol`o Di Domenico, Guido Borghi, Davide Maltoni, Fadi Boutros, Julia Vogel, et al. Sdfr: Synthetic data for face recognition competition. In 2024 IEEE 18th International Conference on ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'David W Jacobs. Frontal to profile face verification in the wild. In 2016 IEEE winter confer- ence on applications of computer vision (WACV), pp. 1–9. IEEE, 2016. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-06 22:34:56
ICLR.cc/2025/Conference
62SLbdHhoW
27wjNiGVJr
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'efited “first-class” languages such as English and Chinese, leaving many other languages underrepresented. This imbalance, while limiting broader applications, generates a natural preference ranking between languages, offering an opportu- ', 'modified_lines': 'nity to bootstrap the multilingual capabilities of LLM in a self-improving manner. Thus, we propose Language Imbalance Driven Rewarding, where the inherent imbalance between dominant and non-dominant languages within LLMs is lever- aged as a reward signal. Iterative DPO training demonstrates that this approach not only enhances LLM performance in non-dominant languages but also improves the dominant language’s capacity, thereby yielding an iterative reward signal. Fine-tuning Meta-Llama-3-8B-Instruct over two iterations of this approach re- sults in continuous improvements in multilingual performance across instruction- following and arithmetic reasoning tasks, evidenced by an average improvement of 7.46% win rate on the X-AlpacaEval leaderboard and 13.9% accuracy on the MGSM benchmark. This work serves as an initial exploration, paving the way for multilingual self-improvement of LLMs. The code is available at https: //github.com/ZNLP/Language-Imbalance-Driven-Rewarding ', 'original_lines': 'nity to bootstrap the multilingual capabilities of LLM in a self-improving man- ner. Thus, we propose Language Imbalance Driven Rewarding, where the in- herent imbalance between dominant and non-dominant languages within LLMs is leveraged as a reward signal. Iterative DPO training demonstrates that this approach not only enhances LLM performance in non-dominant languages but also improves the dominant language’s capacity, thereby yielding an iterative re- ward signal. Fine-tuning Meta-Llama-3-8B-Instruct over two iterations of this approach results in continuous improvements in multilingual performance across instruction-following and arithmetic reasoning tasks, evidenced by an average im- provement of 7.46% win rate on the X-AlpacaEval leaderboard and 13.9% accu- racy on the MGSM benchmark. This work serves as an initial exploration, paving the way for multilingual self-improvement of LLMs. 1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '∗ Equal contribution † Corresponding author ', 'modified_lines': '', 'original_lines': '1 Code: https://github.com/ZNLP/Language-Imbalance-Driven-Rewarding ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'multilingual preference optimization on the D1. 3 DISCUSSION ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'struction prompts X , training a series of models M1, M2, ..., MT . The models and corresponding training data used are defined as follows: (1) M0: Base LLM; Instruction-following model. (2) M1: Initialized with M0, using M0 and X to generate D0, then conduct multilingual preference ', 'modified_lines': 'optimization on the D0. (3) M2: Initialized with M1, using M1 and X to generate D1, then conduct ', 'original_lines': 'optimization on the D0. 
(3)M2: Initialized with M1, using M1 and X to generate D1, then conduct ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '4.1 EXPERIMENTAL SETUP', 'after_section': None, 'context_after': 'provided by Okapi (Lai et al., 2023), which were translated from the original benchmarks using ChatGPT, and conducted evaluations under the lm-evaluation-harness (Gao et al., 2024) framework. ', 'paragraph_idx': 37, 'before_section': '4.1 EXPERIMENTAL SETUP', 'context_before': 'Multilingual NLP Benchmarks We examine the changes in world knowledge and commonsense reasoning abilities throughout the iterative process by evaluating it on the multilingual versions of the ', 'modified_lines': 'MMLU (Hendrycks et al., 2020)3, HellaSwag (Zellers et al., 2019)4, ARC Challenge (Clark et al., 2018)5 and TruthfulQA (Lin et al., 2021)6 benchmarks. We utilized the multilingual benchmarks ', 'original_lines': 'MMLU (Hendrycks et al., 2020)4, HellaSwag (Zellers et al., 2019)5, ARC Challenge (Clark et al., 2018)6 and TruthfulQA (Lin et al., 2021)7 benchmarks. We utilized the multilingual benchmarks ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}]
2025-02-26 13:20:59
ICLR.cc/2025/Conference
U6V0JsPc4u
1VGLRhk02A
[{'section': 'Abstract', 'after_section': None, 'context_after': '0.064 ms 0.128 ms 0.212 ms 0.211 ms 0.938 ms 0.704 ms 4096×4096 8192×8192 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'PyTorch’s optimization of MLP. However, as the input and output sizes continue to increase, matrix computations become the main contributor to runtime. At this point, FAN’s fewer parameters and reduced FLOPs begin to show significant advantages. Note that FAN can be further optimized from ', 'modified_lines': 'the underlying implementation. 0.114 ms 0.133 ms 2048×2048 ', 'original_lines': 'the underlying implementation, we leave this to future research. 0.114 ms 0.133 ms 2048×2048 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'perspective, periodicity is not just a data feature but reflects a form of structural knowledge—one that allows for the transfer and reuse of abstract rules and principles across different contexts. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Second, beyond tasks that explicitly require periodicity modeling, FAN also has utility in a broader range of applications, which has been evidenced by our extensive experiments on real-world tasks, such as symbolic formula representation, time series forecasting, language modeling, and image ', 'modified_lines': 'recognition, where FAN achieve competitive or superior performance than MLP and other base- lines. In fact, many machine learning tasks may harbor hidden forms of periodicity, even without explicitly including periodicity, such as mathematical operations and logic reasoning. If the neural network lacks the ability to model periodicity, it could impair its learning efficiency. From a deeper ', 'original_lines': 'recognition, where FAN achieve competitive or superior performance than MLP and other baselines. In fact, many machine learning tasks may harbor hidden forms of periodicity, even without explicitly including periodicity, such as mathematical operations and logic reasoning. If the neural network lacks the ability to model periodic components, it could impair its learning efficiency. From a deeper ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 12:18:18
ICLR.cc/2025/Conference
x2MNJbU88t
F8KfdS63Kf
[]
2025-02-17 05:30:19
ICLR.cc/2025/Conference
F8KfdS63Kf
fGreaAZvok
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In each iteration of the evalua- tion process, we first establish sampling probability matrices under different optimization objectives respectively based on real-time preference results. Afterwards, we integrate these matrices to obtain a global sampling probability matrix. Furthermore, we explore various tuple sampling strategies and preference aggregation methods to achieve optimal evaluation results. 2 RELATED WORK ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'the uniform allocation among models, which helps reduce updating uncertainty. Based on these insights, we propose UNICBE, a unified uniformity-driven framework that can ', 'modified_lines': 'achieve CBE with better accuracy, convergence and scalability. To comprehensively validate the effectiveness and generalizability of UNICBE, we conduct exper- iments involving various types of judges (LLMs and humans), different benchmarks, varied model sets to be evaluated, diverse scenarios (static and dynamic), and multiple evaluation metrics. The main results indicate that, compared to random sampling baseline, UNICBE saves over 17% of evaluation budgets when achieving the same assessment accuracy (with a Pearson coefficient ex- ceeding 0.995 with the ground truth), demonstrating significantly better convergence and accuracy than baselines. Furthermore, in scenarios where new models are continuously introduced, UNICBE save over 50% of evaluation costs compared to random sampling, showcasing excellent scalability. ', 'original_lines': 'achieve CBE with better accuracy, convergence and scalability. To comprehensively validate the effectiveness and generalizability of UNICBE, we conduct multiple experiments involving various types of judges (LLMs and humans), different benchmarks, varied model sets to be evaluated, diverse scenarios (static and dynamic), and multiple evaluation metrics. The main results indicate that, compared to the random sampling baseline, UNICBE saves over 17% of evaluation budgets when achieving the same assessment accuracy (with a Pearson coefficient exceeding 0.995 with the ground truth), demonstrating significantly better convergence and accuracy than other baselines. Furthermore, in scenarios where new models are continuously introduced, UNICBE can even save over 50% of evaluation costs compared to random sampling, showcasing excellent scalability. 2 ModelsSamplesdbcheifgjmlk... ... Alloc-atingamJudgePrefe-renceResults... score0.30.50.10.7✖️ T times (preference budget) aggre-gation... >>...certainobjective(e.g., Accuracy)preference order tuple Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 10, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where ηmt,-,sk represents the bias between the observed model score ut of mt and the ground truth ˆut when sorely assessing on sample sk. To verify this, we conduct experiments on the AlpacaEval benchmark (Dubois et al., 2024) using GPT-4o (OpenAI, 2024) as the judge across randomly se- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Figure 2: Analyses of potential sampling bias risks in CBE. ', 'modified_lines': '', 'original_lines': 'Bias across Samples. 
Since different models may excel at answering different types of queries, the model scores can vary depending on the sampled data: ut = f pa({(mi, mj, sk, ri,j,k)}i∈1:M,j∈i+1:M )t = ˆut + ηmt,-,sk for ∀ t, k (1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 ACCURACY', 'after_section': '3.2 ACCURACY', 'context_after': '(2) We validate this from two perspectives: (1) We calculate the average |ηmi,mj ,-| according to equa- tion 2 like the process above and show the results in Figure 2(a). Overall, although the bias across models is significantly lower than the bias across samples, it still exists at a scale around 0.05. We further visualize the pair-wise model score bias in Figure 2(c) to validate its wide existence. (2) We obtain over 1.7 million pairwise preference results across 129 LLMs collected by Chatbot Arena *. ', 'paragraph_idx': 21, 'before_section': '3.2 ACCURACY', 'context_before': 'ui = f pa({(mi, mj, sk, ri,j,k)}k∈1:N )i = ˆui + ηmi,mj ,- ', 'modified_lines': 'for ∀ i, j ', 'original_lines': ' for ∀ i, j 4 �푎�푝푎�퐸�푝푎�퐵푝푎average win rate:Elo rating:BT score:0.050.000.100.150.200.250.30bias across samples:bias across models:|���,−,��||���,��,−|0.00-0.050.05-0.100.10-0.150.15-0.200.20-0.250.25-0.300.30-0.350.35-0.400.40-0.450.45-0.500.50-0.55interval of |t,,k|0.000.050.100.150.200.250.30Proportionalpaca-7bllama-2-13b-chat-hfmistral-large-2402mistral-mediumgpt-4-0125-previewguanaco-7bguanaco-13bguanaco-33bguanaco-65bllama-2-7b-chat-hfopenchat-13bbaichuan-13b-chatbaize-v2-7bbaize-v2-13bclaudeclaude-2claude-2.1gemini-progpt-3.5-turbo-0301gpt-3.5-turbo-0613gpt-3.5-turbo-0613gpt-3.5-turbo-0301gemini-proclaude-2.1claude-2claudebaize-v2-13bbaize-v2-7bbaichuan-13b-chatopenchat-13bllama-2-7b-chat-hfguanaco-65bguanaco-33bguanaco-13bguanaco-7bgpt-4-0125-previewmistral-mediummistral-large-2402llama-2-13b-chat-hfalpaca-7b−0.4−0.200.20.4Loading [MathJax]/extensions/MathMenu.js Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': '3.2 ACCURACY', 'after_section': None, 'context_after': 'Uniform Allocation Brings the Least Bias. Based on the discussions above, we analyze the bud- get allocation strategy that can introduce the least bias. Considering the presence of sampling bias, the estimation error of ui with T evaluation budget can be expressed as follows: ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'non-transitivity in 81 model triplets (win rate: A > B, B > C, C > A), which also verifies the existence of bias across models. 
', 'modified_lines': '*https://storage.googleapis.com/arena_external_data/public/clean_ battle_20240814_public.json 4 �푎�푝푎�퐸�푝푎�퐵푝푎average win rate:Elo rating:BT score:0.050.000.100.150.200.250.30bias across samples:bias across models:|���,−,��||���,��,−|0.00-0.050.05-0.100.10-0.150.15-0.200.20-0.250.25-0.300.30-0.350.35-0.400.40-0.450.45-0.500.50-0.55interval of |t,,k|0.000.050.100.150.200.250.30Proportionalpaca-7bllama-2-13b-chat-hfmistral-large-2402mistral-mediumgpt-4-0125-previewguanaco-7bguanaco-13bguanaco-33bguanaco-65bllama-2-7b-chat-hfopenchat-13bbaichuan-13b-chatbaize-v2-7bbaize-v2-13bclaudeclaude-2claude-2.1gemini-progpt-3.5-turbo-0301gpt-3.5-turbo-0613gpt-3.5-turbo-0613gpt-3.5-turbo-0301gemini-proclaude-2.1claude-2claudebaize-v2-13bbaize-v2-7bbaichuan-13b-chatopenchat-13bllama-2-7b-chat-hfguanaco-65bguanaco-33bguanaco-13bguanaco-7bgpt-4-0125-previewmistral-mediummistral-large-2402llama-2-13b-chat-hfalpaca-7b−0.4−0.200.20.4Loading [MathJax]/extensions/MathMenu.js Published as a conference paper at ICLR 2025 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.3 PREFERENCE AGGREGATION As discussed in §2, mainstream preference aggregation strategies include averaging win rate f pa ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'g , which can avoid the suboptimal achievement ', 'modified_lines': '', 'original_lines': '6 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 MAIN RESULTS', 'after_section': None, 'context_after': '5.2 MAIN RESULTS Accuracy and Convergence. The results of compared CBE methods on AlpacaEval benchmark with GPT4-turbo as the judge are shown in Figure 3. To better illustrate the results, we also calculate the percentage of preference budget saved by each method compared to RANDOM baseline when achieving the same performance. In terms of performance, ALPACAEVAL << RANDOM < ARENA < UNICBE. To understand the differences in the performance of each method, we quantitatively analyze them based on the guidelines summarized in § 3. To achieve accuracy, convergence, and ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'absolute error of the estimated win rate. rs and rp denote the Spearman and Pearson correlations between the the estimated model scores and the ground truth respectively. 
', 'modified_lines': ' †https://tatsu-lab.github.io/alpaca_eval/, https://lmarena.ai/ 7 123456780.010.020.030.040.050.06UniCBERandomArenaAlpaca123456780.97250.97500.97750.98000.98250.98500.98750.99000.9925rs123456780.98250.98500.98750.99000.99250.99500.9975rp123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under 123456780.000.050.100.150.200.25Save Percentage Under rs123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under rp Published as a conference paper at ICLR 2025 ', 'original_lines': '†https://tatsu-lab.github.io/alpaca_eval/, https://lmarena.ai/ 7 123456780.010.020.030.040.050.06UniCBERandomArenaAlpaca123456780.97250.97500.97750.98000.98250.98500.98750.99000.9925rs123456780.98250.98500.98750.99000.99250.99500.9975rp123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under 123456780.000.050.100.150.200.25Save Percentage Under rs123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under rp Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.4 GENERALIZABILITY UNDER DIFFERENT SETTINGS', 'after_section': None, 'context_after': 'Varied Number of Models and Samples. Finally, as shown in Figure 7, we conducte experiments by varying the number of models M and samples N . It can be observed that UNICBE achieves significantly better results compared to all the baselines under these settings, especially when M ', 'paragraph_idx': 50, 'before_section': None, 'context_before': 'cluded in MT-Bench, the experimental results show relatively larger fluctuations. The results above demonstrate the good generalizability of UNICBE across different judges and the data domain. ', 'modified_lines': 'Figure 6: Results of compared CBE methods with GPT-3.5-turbo as the judge on AlpacaEval (above) and human as the judge on MT-Bench (below). 9 123456780.0100.0150.0200.0250.0300.0350.0400.045UniCBEUniCBE w ftspUniCBE w/o PaccUniCBE w fpaEloUniCBE w/o PconRandomUniCBE w fpaAccUniCBE w/o Psca123456780.97250.97500.97750.98000.98250.98500.98750.99000.9925rs123456780.98250.98500.98750.99000.99250.99500.9975rp123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under UniCBEUniCBE w/o PaccUniCBE w/o PconUniCBE w/o PscaUniCBE w ftspUniCBE w fpaEloUniCBE w fpaAccRandom123456780.000.050.100.150.200.25Save Percentage Under rs123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under rp123456780.010.020.030.040.050.06UniCBERandomArenaAlpaca123456780.000.020.040.060.080.100.120.140.16Save Percentage Under rs123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under rp0.10.20.30.40.50.60.70.020.040.060.080.10UniCBERandomArenaAlpaca0.20.30.40.50.60.70.000.050.100.150.200.25Save Percentage Under rs0.20.30.40.50.60.70.000.020.040.060.08Save Percentage Under rp Published as a conference paper at ICLR 2025 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 CONCLUSIONS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'are obtained with GPT-4o as the judge on AlpacaEval. Figure 8: Performance of CBE methods with list-wise preference of GPT-4o on AlpacaEval. 
', 'modified_lines': '', 'original_lines': ' 10 123456780.010.020.030.040.050.06UniCBERandomArenaAlpaca123456780.000.020.040.060.080.100.120.140.16Save Percentage Under rs123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under rp0.10.20.30.40.50.60.70.020.040.060.080.10UniCBERandomArenaAlpaca0.20.30.40.50.60.70.000.050.100.150.200.25Save Percentage Under rs0.20.30.40.50.60.70.000.020.040.060.08Save Percentage Under rp123456780.010.020.030.040.050.06UniCBERandomArenaAlpaca123456780.010.020.030.040.05123456780.010.020.030.040.050.06123456780.0000.0250.0500.0750.1000.1250.1500.175Save Percentage Under 123456780.0000.0250.0500.0750.1000.1250.150Save Percentage Under 123456780.0000.0250.0500.0750.1000.1250.150Save Percentage Under 123450.010.020.030.040.05UniCBERandomArena123450.9700.9750.9800.9850.9900.995rs123450.9750.9800.9850.9900.9951.000rp123450.000.050.100.150.200.250.30Save Percentage Under 123450.000.050.100.150.200.250.300.35Save Percentage Under rs123450.000.050.100.150.200.250.30Save Percentage Under rp Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-03 06:34:06
ICLR.cc/2025/Conference
Im0t6uFMyy
3qr5sQAyT2
[]
2024-11-24 08:03:21
ICLR.cc/2025/Conference
3qr5sQAyT2
NPbOZaqgWr
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'In summary, our main contributions are as follows: ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'egocentric view, representing human motion in multi-person scenario necessitates both egocentric and absolute information. ', 'modified_lines': 'Several works have focused on human interaction domain. For instance, InterFormer (Chopin et al., 2023) proposes injecting human skeleton priors into transformer attention layers for effective spatial modeling. InterGen (Liang et al., 2024) introduces a mutual attention mechanism within diffusion process for joint action-reaction generation. However, these methods are not directly applicable to real-world applications, as they rely on extra prompts to condition the generation process. ReGen- Net (Xu et al., 2024b), similar to our approach, acknowledges the online and unprompted nature of reaction generation, and proposes a diffusion-based model for online reaction generation. It ob- serves that given the action’s intention as a condition explicitly, the model can achieve superior performance compared to unprompted settings, highlighting the necessity of understanding inter- action semantics for reaction generation. However, ReGenNet directly models action-to-reaction generation process, without inferring action intention, thus achieving subpar performance. To address these challenges, we propose Think-Then-React model (TTR), an LLM-based model designed to predict human reactions in online and unprompted settings with the following innova- tions: First, to unifiedly represent human motion in multi-person scenario, we propose decoupled space-pose tokenizers that separately handle egocentric pose features and absolute space features. Specifically, we train a VQ-VAE (Van Den Oord et al., 2017) to encode egocentric human pose se- quences (i.e., the space features are normalized, to ensure codebook utilization) into LLM-readable tokens. To maintain spatial features which are crucial in multi-person interaction scenarios, we propose a space tokenizer that encodes positions and orientations as space tokens. We concatenate initial space tokens as prefixes to pose sequences, indicating the initial absolute state of an egocentric motion sequence. Second, to stabilize reaction prediction process, we introduce a novel framework that is capable to automatically infer text prompts for reaction generation. Specifically, TTR unifies two processes within one model: a thinking process that infers action intent and reasons reaction description, and a reacting process that takes both the action motion and inferred prompts as in- put, to generate precise and semantically appropriate reactions. Third, to adapt a language model to motion modality, we design a multi-task and multi-stage training pipeline consisting of motion- text, space-pose and motion-motion generation tasks. With our proposed training strategy, TTR is capable to effectively build correlations between text, motion and space modalities. ', 'original_lines': 'Several works have focused on the human-human interaction domain. For instance, Inter- Former (Chopin et al., 2023) proposes injecting human skeleton priors into transformer attention layers for effective spatial modeling. InterGen (Liang et al., 2024) introduces a mutual attention mechanism within diffusion process for joint action-reaction generation. 
However, these methods are not directly applicable to real-world applications, as they rely on extra prompts to condition the generation process. ReGenNet (Xu et al., 2024b), which is most similar to our approach, acknowl- edges the online and unprompted nature of reaction generation, and proposes a diffusion-based model for online reaction generation. It observes that explicitly given the action’s intention as a con- dition, the model can achieve superior performance compared to unprompted settings, highlighting the necessity of understanding interaction semantics for reaction generation. However, ReGenNet directly models action-to-reaction generation process, without explicitly inferring action intention, thus achieving subpar performance. To address these challenges, we propose Think-Then-React (TTR), an LLM-based model designed to predict human reactions in online and unprompted settings with the following innovations: First, to unifiedly represent human motion in multi-person scenario, we propose decoupled space-pose tokenizers that separately handle egocentric pose features and absolute space features. Specifically, we train a VQ-VAE (Van Den Oord et al., 2017) to encode egocentric human pose sequences (i.e., the space features are normalized, to ensure codebook utilization) into LLM-readable tokens. To maintain spatial features which are crucial in multi-person interaction scenarios, we propose a space tokenizer that encodes 2D positions and human body orientations in the world frame as space tokens. We then concatenate initial space tokens as prefixes to pose sequences, indicating the initial absolute state before an egocentric motion. Second, to stabilize reaction prediction process, we introduce a novel framework that is capable to automatically infer text prompts for reaction generation. Specif- ically, TTR unifies two processes within one model: a thinking process that infers action intent and reasons reaction description, and a reacting process that takes both the action motion and in- ferred prompts as input, to generate precise and semantically appropriate reactions. Third, to adapt a language model to motion modality, we design a multi-task and multi-stage training pipeline con- sisting of motion-text, space-pose and motion-motion generation tasks. With our proposed training strategy, TTR is capable to effectively build correlations between text, motion and space modalities. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '3.2 UNIFIED MOTION REPRESENTATION', 'after_section': '3.2 UNIFIED MOTION REPRESENTATION', 'context_after': 'maintain absolute space features. The y-axis (vertical) is not included, as few motions begin in a “floating” state. Based on pose and space features of p1 and p2, we propose a unified tokenizing pipeline to convert them into LLM-readable tokens. ', 'paragraph_idx': 16, 'before_section': '3.2 UNIFIED MOTION REPRESENTATION', 'context_before': 'the x-z plane represents the horizontal plane and y-axis represents the vertical direction, we nor- malize their centers at the origin while facing positive z axis. Then for each frame, we extract the 3D skeletons’ joint position, velocity and rotation as normalized (or egocentric) pose feature. 
Be- ', 'modified_lines': 'fore normalizing, we keep the two persons’ pelvis 2D coordination x, z and body orientation r to ', 'original_lines': " 4 <𝐱𝟐><𝐳𝟓><𝐫𝟔> <𝒑𝟑𝟐><𝒑𝟏𝟖><𝒑𝟐𝟒><𝒑𝟏𝟗><𝑝32><𝑝18><𝑝24><𝑝19><𝑥3><𝑧1><𝑟1><𝑥6><𝑧3><𝑟6>SpaceNormalizePose TokenizerSpace Tokenizer𝑧𝑥𝑧𝑥Trans & Rotate Think-Then-ReactInfer interaction from action:The person approaches, the other person stands still.Think-Then-ReactPredict reaction to action: <𝑥2><𝑧5><𝑟6> <𝑝32><𝑝18><𝑝13><𝒑𝟐𝟑>(c) TTR inference stage with re-thinking<𝑥2><𝑧5><𝑟6> <𝑝32><𝑝18><𝑥2><𝑧5><𝑟6> <𝑝32><𝑝18><𝑝24>𝑇=2𝑇=3The person is patting the other’s shoulder, the other person turns around.Think-Then-ReactPredict reaction to action: <𝑥2> <𝑧5><𝑟6> <𝑝32><𝑝18><𝑝24>Infer interaction from action:<𝑝13><𝑝23><𝒑𝟏𝟕>……“The person pats the other's left shoulder from behind,who turns and waves back.”𝑧𝑥(b) Our unified motion tokenizer<𝐱𝟎><𝐳𝟎><𝐫𝟎> <𝒑𝟏𝟑><𝒑𝟐𝟑><𝒑𝟏𝟕><𝒑𝟖>“The person pats the other's left shoulder from behind,who turns and waves back.”𝑧𝑥ActionTokenizeReaction(a) Action-reaction tokenizing Under review as a conference paper at ICLR 2025 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 fore normalizing, we keep the two persons’ pelvis 2D coordination x, z and body orientation r, to ", 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '3.2 UNIFIED MOTION REPRESENTATION', 'after_section': '3.2 UNIFIED MOTION REPRESENTATION', 'context_after': '3.2.2 ABSOLUTE SPACE TOKENIZER ', 'paragraph_idx': 18, 'before_section': '3.2 UNIFIED MOTION REPRESENTATION', 'context_before': 'Furthermore, for smoother reconstructed motion and a stable training process, we add an extra veloc- ity regularization in the reconstruction loss and employ exponential moving average (EMA) Hunter (1986) with codebook reset techniques, following Zhang et al. (2023). More details about this sec- ', 'modified_lines': 'tion are provided in Section A.1. ', 'original_lines': 'tion are provided in the appendix. ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Finally, we use a unified coding system to represent action, reaction, and their relative information. 3.3 UNIFIED LLM BASED MOTION UNDERSTANDING AND GENERATION 3.3.1 PRE-TRAINING (1) Motion - Text. To enable the model to understand and generate human motion, we com- bine the action and reaction token sequences to construct prompts, which are then fed into the ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'pose tokens with absolute space information, we propose converting position and rotation of a per- son’s center point into LLM-readable tokens. ', 'modified_lines': 'As shown in Figure 2, before normalizing a human motion, we first extract the center point’s fea- tures, i.e., the position x and z and orientation r. We then compute the range of x, z, and r across the dataset to get the maximum and minimum values. These ranges are uniformly divided into Nb bins, converting each continuous value to discrete tokens. For example, x = 0.55 will be represented as “<x15>” if all the x positions are in range [−1, 1] with Nb = 20 bins. Specifically, for each motion, we apply absolute space tokenizer to encode initial x, z, and r into egocentric pose tokens, and apply pose tokenizer to encode the following pose sequence, i.e., the following motion, into pose tokens. 
Such tokens enable training a model that can understand and generate motion and language simultaneously effectively and efficiently in the subsequent phase. To adapt a language model into a motion-language model, we first pre-train the model with multiple tasks in diverse formats. The pre-training tasks can be categorized into three main types: ', 'original_lines': 'As shown in Figure 2, before normalizing the human motion, we first extract the center point’s features, i.e., the position x and z and orientation r. We then compute the range of x, z, and r across the dataset to get the maximum and minimum values. These ranges are uniformly divided into Nb bins, converting each continuous value to discrete tokens. For example, x = 0.55 will be represented as token “<x15>” when all the x positions are in [−1, 1] and divided into Nb = 20 bins. Specifically, at each timestep t, we apply absolute space tokenizer to encode x, z, and r of the center point at the beginning into egocentric pose tokens, and apply pose tokenizer to encode a series of normalized motions before next timestep t+1 into pose tokens. Such tokens enable training a model that can understand and generate motion and language simultaneously effectively and efficiently in the subsequent phase. To adapt a large language model into a motion-language model, we first pre-train the model with multiple tasks in diverse formats. The pre-training tasks can be categorized into three main types: 5 Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 EXPERIMENT We evaluate our proposed method with strong baselines and further analyze contributions of different ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'process ensures alignment between training and inference, as ground-truth prompts are inaccessible during inference. ', 'modified_lines': '', 'original_lines': '6 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.2 COMPARISON TO BASELINES As shown in the upper side of Table 1, our method TTR significantly outperforms baseline meth- ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'generation metrics for early-stopping, resulting about 100K pre-training steps and 40K fine-tuning steps. We set the re-thinking interval Nr to 4 tokens and divide each space signal into Nb = 10 bins. ', 'modified_lines': '', 'original_lines': '7 Under review as a conference paper at ICLR 2025 Table 1: Comparison to state-of-the-art baselines and ablation studies of our method on Inter-X dataset. ↑ or ↓ denotes a higher or lower value is better, and → means that the value closer to real is better. We use ± to represent 95% confidence interval and highlight the best results in bold. For ablation methods (in grey), PT, M, P, S, and SP are abbreviations for pre-training, motion, pose, space, and single-person data, respectively. 
Methods Real InterFormer MotionGPT InterGen ReGenNet TTR(Ours) w/o Think w/o All PT. w/o M-M PT. w/o P-S PT. w/o M-T PT. w/o SP Data Top-1 R-Precision↑ Top-2 Top-3 Acc.↑ FID↓ MMDist↓ Div.→ 0.511±.003 0.682±.002 0.776±.002 0.463±.000 0.000±.000 5.348±.002 2.498±.005 0.172±.012 0.238±.003 0.326±.036 0.384±.005 0.423±.005 0.367±.003 0.398±.007 0.408±.005 0.417±.004 0.406±.003 0.414±.004 0.292±.013 0.354±.004 0.423±.063 0.483±.002 0.599±.003 0.491±.027 0.531±.002 0.563±.004 0.582±.004 0.557±.004 0.592±.005 0.343±.012 0.441±.003 0.525±.053 0.572±.003 0.693±.003 0.584±.008 0.628±.003 0.646±.005 0.664±.004 0.637±.004 0.685±.003 0.171±.009 0.186±.002 0.254±.019 0.297±.004 0.318±.003 0.230±.036 0.288±.002 0.293±.002 0.308±.003 0.304±.003 0.315±.004 10.468±.021 5.823±.048 5.506±.257 3.988±.048 1.942±.017 3.828±.016 3.467±.113 2.874±.020 2.685±.024 2.580±.021 2.007±.015 7.831±.018 6.211±.005 6.182±.038 5.867±.009 5.643±.003 6.186±.055 5.822±.003 5.736±.003 5.699±.004 5.822±.003 5.667±.003 3.505±.023 2.615±.007 2.284±.009 2.502±.001 2.629±.006 2.609±.006 2.909±.053 2.553±.006 2.859±.007 2.889±.005 2.611±.005 Figure 3: Visualization of a person’s motion sequences in Inter-X dataset and HumanML3D dataset. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.5 IMPACT OF DOWN-SAMPLING PARAMETER IN MATCHING MODEL FOR EVALUATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ally belongs to symmetrical interactions, e.g., pulling or being pulled; whereas, when actions are far from reactions, the motion usually belongs to asymmetrical interaction, e.g., massage. ', 'modified_lines': '', 'original_lines': "9 (c) The first person runs towards the other andknocksher/his left shoulder against the right shoulder, and the second person isforced to step back.(a) Two people stand facing each other. One person approaches and opens her/his arms toembracethe other person's back and waist, while the other personimitatesthe same action.(b) The first personpushesthe second person heavily on the back with both hands, causing her/him tobe pushed forward several steps.(d) The first person grabs the other person's waist, the second personwrestleswith the first person. Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Figure 5: summed ranking score diferences. Impact of input action FPS to Figure 6: Impact of re-thinking interval to FID and average inference time per step (AITS). ", 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'leverage a enhanced Thinking model as mentioned in Section A.3, and the FID decreases from 1.94 to 1.88, proving that a better thinking process leads could promote the following Reacting process. Moreover, when discarding the Thinking process, our model dramatically deteriorates in reaction ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.318±.003 0.230±.036 ', 'modified_lines': '', 'original_lines': 'Figure 7: User preference between TTR and Re- GenNet on different motion duration. Figure 8: The validation loss curves of different tasks of TTR. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-19 07:45:48
ICLR.cc/2025/Conference
NPbOZaqgWr
yT9Hiy8u6Y
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Modeling human-like action-to-reaction generation has significant real-world ap- ', 'modified_lines': 'plications, like human-robot interaction and games. Despite recent advancements in single-person motion generation, it is still challenging to well handle action-to- reaction generation, due to the difficulty of directly predicting reaction from action sequence without prompts, and the absence of a unified representation that effec- tively encodes multi-person motion. To address these challenges, we introduce Think-Then-React (TTR), a large language-model-based framework designed to generate human-like reactions. First, with our fine-grained multimodal training strategy, TTR is capable to unify two processes during inference: a thinking pro- cess that explicitly infers action intentions and reasons corresponding reaction description, which serve as semantic prompts, and a reacting process that pre- dicts reactions based on input action and the inferred semantic prompts. Second, to effectively represent multi-person motion in language models, we propose a unified motion tokenizer by decoupling egocentric pose and absolute space fea- tures, which effectively represents action and reaction motion with same encod- ing. Extensive experiments demonstrate that TTR outperforms existing baselines, achieving significant improvements in evaluation metrics, such as reducing FID from 3.988 to 1.942. ', 'original_lines': 'plications, like human-robot interaction and games. Despite recent advance- ments in single-person motion generation, it is still challenging to well handle action-to-reaction generation, due to the difficulty of directly predicting reaction from action sequence without prompts, and the absence of a unified representa- tion that effectively encodes multi-person motion. To address these challenges, we introduce Think-Then-React (TTR), a large language-model-based framework designed to generate human-like reactions. First, with our fine-grained mul- timodal training strategy, TTR is capable to unify two processes during infer- ence: a thinking process that explicitly infers action intentions and reasons cor- responding reaction description, which serve as semantic prompts, and a react- ing process that predicts reactions based on input action and the inferred se- mantic prompts. Second, to effectively represent multi-person motion in lan- guage models, we propose a unified motion tokenizer by decoupling egocentric pose and absolute space features, which effectively represents action and reac- tion motion with same encoding. Extensive experiments demonstrate that TTR outperforms existing baselines, achieving significant improvements in evaluation metrics, such as reducing FID from 3.988 to 1.942. Source code is available at https://github.com/AlbertTan404/Think-Then-React. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': "1 Think-Then-ReactMotion TokenizerThe person stands still, the other person stands in front of her/him.<𝑝32>𝑡=1𝑡=3𝑡=5𝑡=7<𝑝32>“The first person reaches her/his right hand, she/he grasps the other person's hands and they shaketwice.”“The person raises her/his right hand. The other person extends her/his hands and give her/him a high-five.”“Two people raise their right hands. 
They gave each other a high-fiveabove their heads.”<𝑝79><𝑝24><𝑝83>𝑡=3𝑡=5𝑡=7Thinking ResultsTimestepReaction<𝑝48><𝑝96><𝑝83>ActionInput ActionPredicted Reaction𝑡=2𝑡=4𝑡=6 Published as a conference paper at ICLR 2025 in the domain of human motion generation especially single-person motion generation, conditioned on text prompts (Guo et al., 2024; 2022b; Zhang et al., 2023) and action labels (Xu et al., 2023; Guo et al., 2020). Leveraging well-annotated human motion datasets (Xu et al., 2024a; Guo et al., ", 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'Predicting human reaction to human action in real world scenario is an online and unconstrained task, i.e., future states and text prompts are inaccessible, and it has board applications in virtual re- ', 'modified_lines': 'ality, human-robot interaction and gaming. Recently, significant advancements have been achieved ', 'original_lines': 'ality, human-robot interaction and gaming. Recently, significant advancements have been achieved ', 'after_paragraph_idx': None, 'before_paragraph_idx': 3}]
2025-02-19 12:11:39
ICLR.cc/2025/Conference
yT9Hiy8u6Y
d8dUUtfjMg
[]
2025-02-28 13:12:53
ICLR.cc/2025/Conference
QfTKQ0CJTB
YJIQkxkRAJ
[]
2024-11-28 09:47:46
ICLR.cc/2025/Conference
sQWqQI9GBf
4fhRXB86XR
[{'section': '3 MODEL ARCHITECTURE OF MAMBA RETRIEVER', 'after_section': None, 'context_after': '4 SYNTHETIC DATA GENERATION ', 'paragraph_idx': 17, 'before_section': '3 MODEL ARCHITECTURE OF MAMBA RETRIEVER', 'context_before': 'In this paper, we train Mamba retrievers to operate at sentence-level resolution and retrieve sen- tences. However, our model architecture is flexible and can be adapted to other levels of granularity. For example, when using paragraph-level resolution, the model would focus on the final token of ', 'modified_lines': 'each paragraph in the document. See a formal formulation of Mamba retriever in Appendix F. ', 'original_lines': 'each paragraph in the document. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 17}]
2024-11-28 05:35:26
ICLR.cc/2025/Conference
GcvJDfL95h
DMiE3AGDho
[{'section': '3.3 CLUSTERING AT INTERMEDIATE LAYERS', 'after_section': None, 'context_after': '3.3 CLUSTERING AT INTERMEDIATE LAYERS ', 'paragraph_idx': 31, 'before_section': '3.3 CLUSTERING AT INTERMEDIATE LAYERS', 'context_before': 'Figure 3b shows the results. We observe consistent U-shapes across LLMs and tasks, indicating that both beginning and ending tokens play important roles in prompts, with beginning tokens showing relatively stronger effects. These results again support the existence of clustering in Rd. Note that ', 'modified_lines': 'these results do not imply that middle tokens are not important (see Table 8 in Appendix F). ', 'original_lines': 'these results do not imply that middle tokens are not important (see Table 7 in Appendix F). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'after_section': None, 'context_after': '5.1 ', 'paragraph_idx': 50, 'before_section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'context_before': 'results when ktotal = 16 and k = 4 in Appendix D to show the scalability of Cluster-based Search. Furthermore, in this section, we focus on non-instructed prompt to better align with previous discussions on the positions of demonstrations. Additional results regarding the instructed-prompt ', 'modified_lines': 'case are given in Table 7 in Appendix F. ', 'original_lines': 'case are given in Table 6 in Appendix F. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 50}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_after': 'for the pseudocode of this criterion. Figure 7: Normalized accuracies and search ', 'paragraph_idx': 57, 'before_section': None, 'context_before': '(cid:1). is more confident than ℓ′ −1 ', 'modified_lines': 'Among all prompt candidates, we will select the most confident one. See Algorithm 1 in Appendix B ', 'original_lines': 'Among all prompt candidates, we will select the most confident one. See Algorithm B in Appendix B ', 'after_paragraph_idx': 57, 'before_paragraph_idx': None}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_after': 'introduced in Section 2.2. Each task includes 1, 000 tuples (E, q). Average accuracies of different LLMs are reported in Table ??. On average, the performances of Random Selection are the worst, with large gaps compared to Exhaustive Search, showing the effectiveness of selecting and ordering. ', 'paragraph_idx': 58, 'before_section': None, 'context_before': '−1, i.e. c (ℓ−1) > c (cid:0)ℓ′ ', 'modified_lines': 'We run the Exhaustive Search and Cluster-based Search with Algorithm 1 on various ICL tasks ', 'original_lines': 'We run the Exhaustive Search and Cluster-based Search with Algorithm B on various ICL tasks ', 'after_paragraph_idx': 58, 'before_paragraph_idx': None}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': None, 'context_after': '14 ', 'paragraph_idx': 57, 'before_section': None, 'context_before': 'approach allows for the identification of the optimal demonstration order that maximizes the model’s confidence in its predictions. 
', 'modified_lines': 'Algorithm 1 Entropy-Based Selecting Criterion Input: set of prompt candidates P set cBest = −inf set pBest = None for p in P do compute logits ℓ−1 compute confidence score c (ℓ−1) if c (ℓ−1) > cBest then cBest = c (ℓ−1) pBest = p end if end for Output: pBest ', 'original_lines': '[H] Entropy-Based Selecting Criterion Input: set of prompt candidates P set cBest = −inf set pBest = None p in P compute logits ℓ−1 compute confidence score c (ℓ−1) c (ℓ−1) > cBest cBest = c (ℓ−1) pBest = p Output: pBest C SEARCH WITH IDEAL SELECTION CRITERION We report in Table 4 results of similar experiments as in Section 5.1 on different LLMs and ICL tasks. By comparing the performance of Cluster-based Search with Exhaustive Search, we can assess the effectiveness of our proposed method in finding optimal demonstration orders. Overall, the performances of Cluster-based Search are comparable to Exhaustive Search, while significantly reducing the search time complexity. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 THEORETICAL ANALYSIS', 'after_section': None, 'context_after': 'E LOW-DIMENSIONAL PROJECTIONS WITH T-SNE We re-build Figure 1 with UMAP projection replaced by t-SNE projection. The t-SNE projections, ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'for the efficiency of Cluster-based Search in selecting and ordering demonstrations for in-context learning, even when dealing with a larger pool of available demonstrations. ', 'modified_lines': 'We further conduct similar experiments with ktotal = 16 and k = 10 to show the scalability of our proposed Cluster Search. In this case, the Exhaustive Search is clearly infeasible. Experimental results are reported in Table 6. On average, the Cluster Search method is better than Random Search by 1.5% to 5.9%. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 07:34:47
ICLR.cc/2025/Conference
DMiE3AGDho
g8Pa0MvnR2
[{'section': '4.1 THEORETICAL ANALYSIS', 'after_section': None, 'context_after': '4.2 EMPIRICAL ANALYSIS ', 'paragraph_idx': 41, 'before_section': '4.1 THEORETICAL ANALYSIS', 'context_before': 'coexist: while the theoretical tendency toward first-token clustering manifests in early layers, the practical requirements of causal language modeling appear to lead to attention redistribution in later layers. When combined with positional encodings, this attention pattern may contribute to the ', 'modified_lines': 'observed dual clustering behavior - both first-demonstration and last-demonstration clustering. ', 'original_lines': 'observed dual clustering behavior - both first-demonstration and last-demonstration clustering.. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 41}, {'section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'after_section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'context_after': ':= ktotal! ', 'paragraph_idx': 47, 'before_section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'context_before': 'DEMONSTRATIONS IN SELF-ADAPTIVE ICL SETTINGS ', 'modified_lines': 'Based on the clustering property, we propose an efficient approach to improve self-adaptive ICL methods (Wu et al., 2022). In particular, self-adaptive methods aim to optimize the selection and ordering of demonstrations based on model’s own predictions, without relying on external knowledge or supervision. However, this process typically suffers from factorial complexity. For example, when exhaustively selecting and ordering k out of ktotal demonstrations (referred to as Exhaustive Search), there are Ak (ktotal−k)! possibilities. In contrast, the first-demonstration clustering property suggests that prompts sharing the first demonstration are likely to have the same next-token prediction. Consequently, our approach, called Cluster-based Search, only requires selecting the first demonstration, while the rest can be randomly selected, resulting in merely ktotal possibilities. ', 'original_lines': 'Based on the clustering property, we propose an efficient approach to improve the self-adaptive ICL methods (Wu et al., 2022). In particular, self-adaptive methods aim to optimize the selection and ordering of demonstrations based on the model’s own predictions, without relying on external knowledge or supervision. However, this process typically suffers from factorial complexity. For example, when exhaustively selecting and ordering k out of ktotal demonstrations (referred to as Exhaustive Search), there are Ak (ktotal−k)! possibilities. In contrast, the first-demonstration clustering property suggests that prompts sharing the first demonstration are likely to have the same next-token prediction. Consequently, our approach, called Cluster-based Search, only requires selecting the first demonstration, while the rest can be randomly selected, resulting in merely ktotal possibilities. ', 'after_paragraph_idx': 48, 'before_paragraph_idx': 47}, {'section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 50, 'before_section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'context_before': 'based Search are comparable. Concretely, performances of Cluster-based Search only decrease relatively by 2.4% on average compared to Exhaustive Search, while search time of Cluster-based Search decreases by 91.7% (Figure 7). Note that with larger ktotal, the time saving is almost 100%. 
', 'modified_lines': 'Additionally, the performance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 50}, {'section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'after_section': None, 'context_after': '5.2 ENTROPY-BASED SELECTION CRITERION ', 'paragraph_idx': 47, 'before_section': None, 'context_before': '89.6 Table 2: Accuracies (%) of Random Selection, Exhaustive Search, and Cluster-based Search with entropy-based ', 'modified_lines': 'selecting criterion on different LLMs and ICL tasks. The subscript numbers indicate the standard deviation over 10 runs. Due to computational constraints, we do not report standard deviations for Exhaustive Search. Bold: best; Underline: second best. Searching is obviously more effective than random selection. Moreover, the performances of Cluster-based Search are comparable to Exhaustive Search. ', 'original_lines': 'selecting criterion on different LLMs and ICL tasks. The subscript numbers indicate the standard deviation over 10 runs. Due to computational constraints, we do not report standard deviations for Exhaustive Search. Bold: best; Underline: second best. Searching is obviously more effective than random selection. Moreover, while Exhaustive Search achieves the best accuracies in most cases, the performances of Cluster-based Search are comparable. Additionally, the performance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_after': 'with large gaps compared to Exhaustive Search, showing the effectiveness of selecting and ordering. However, it is interesting to note that performances of Cluster-based Search are comparable to Exhaustive Search, with a slight absolute performance gap of 0.3% on average. This result once ', 'paragraph_idx': 58, 'before_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_before': 'We run the Exhaustive Search and Cluster-based Search with Algorithm 1 on various ICL tasks introduced in Section 2.2. Each task includes 1, 000 tuples (E, q). Average accuracies of different ', 'modified_lines': 'LLMs are reported in Table 2. On average, the performances of Random Selection are the worst, ', 'original_lines': 'LLMs are reported in Table ??. On average, the performances of Random Selection are the worst, ', 'after_paragraph_idx': 58, 'before_paragraph_idx': 58}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': None, 'context_after': '9 ', 'paragraph_idx': 61, 'before_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_before': 'versus 88.2%. Most notably, this simpler approach reduces the computational complexity significantly - from O(ktotal(ktotal − 1)) when selecting both first and last demonstrations to just O(ktotal) when selecting only the first demonstration. This order of magnitude improvement in efficiency, combined ', 'modified_lines': 'with the maintained or improved accuracy, makes first-only clustering a clearly superior choice for demonstration selection. 
', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 61}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 RELATED WORK Recent research has unveiled the sensitivity to the order of in-context demonstrations in large language ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 3: Comparison between First-only and First-Last clustering search strategies across different models and tasks, showing accuracy (%) on classification and reasoning tasks. ', 'modified_lines': '', 'original_lines': ' with the maintained or improved accuracy, makes first-only clustering a clearly superior choice for demonstration selection. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'after_section': None, 'context_after': 'The serial-position effect, a fascinating psychological phenomenon where first and last items in a sequence are recalled best, has been a fruitful area of study. Pioneering work highlights how position ', 'paragraph_idx': 68, 'before_section': '5.2 ENTROPY-BASED SELECTION CRITERION', 'context_before': 'and knowledge transfer. Recent work reveal that even the simple optimization algorithm SGD subtly regularizes the model’s sensitivity (Lee et al., 2023) , while other studies manipulate the Jacobian to boost noise and attack resistance (Hoffman et al., 2019) . Beyond robustness, it has been highlighted ', 'modified_lines': 'how Jacobian regularization supports more realistic dynamics in the system (Finlay et al., 2020). ', 'original_lines': 'how Jacobian regularization supports smoother, more realistic dynamics in the system (Finlay et al., 2020). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 68}, {'section': 'Abstract', 'after_section': None, 'context_after': '10 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We study the prompt embedding space to understand the order sensitivity of ICL in decoder-only LLMs. Our analysis reveals that prompts sharing first or last demonstrations tend to form clusters in the embedding space, with first-demonstration clustering showing notably stronger effects. Our ', 'modified_lines': 'theoretical analysis suggests this asymmetry may be partially explained by the causal structure, as first-token information is reused O(n2) times in a self-attention layer. The last-demonstration clustering appears to emerge through a more complex interaction between the causal structure and positional encoding, though its precise mechanisms warrant further investigation. Based on these ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'insights, we introduce Cluster-based Search for demonstration selection and ordering in self-adaptive ICL methods, achieving comparable performances to exhaustive search while reducing computational complexity from factorial to quadratic. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '592 593 ', 'modified_lines': '', 'original_lines': 'theoretical analysis suggests this asymmetry may be partially explained by the causal structure, as first-token information is reused O(n2) times in a self-attention layer. The last-demonstration clustering appears to emerge through a more complex interaction between the causal structure and positional encoding, though its precise mechanisms warrant further investigation. 
Based on these ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al. Phi-2: The surprising power of small language models. Microsoft Research Blog, 1(3):3, 2023. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '646 647 ', 'modified_lines': '', 'original_lines': 'Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186, 2024. Romuald A Janik. Aspects of Human Memory and Large Language Models. arXiv preprint arXiv:2311.03839, 2023. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 08:01:59
ICLR.cc/2025/Conference
g8Pa0MvnR2
cLKZy30MFa
[{'section': '4.1 THEORETICAL ANALYSIS', 'after_section': None, 'context_after': '4.2 EMPIRICAL ANALYSIS ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'despite variations in first-token attention, suggests a fundamental architectural behavior aligned with the next-token prediction objective. ', 'modified_lines': 'This layerwise progression offers insights into how our theoretical and empirical findings coexist: while the theoretical tendency toward first-token clustering manifests in early layers (with attention weights of 0.8-0.9), the practical requirements of causal language modeling lead to attention redis- tribution in later layers, where we observe increased attention to the last token (reaching 0.1-0.2). This pattern suggests a dynamic balance between the model’s architectural bias toward first-token clustering and its need to capture sequence-final information for next-token prediction. This observa- tion aligns with prior findings that causal attention mechanisms can infer positional information even without explicit positional encoding (Kazemnejad et al., 2024) - a property that emerges naturally from the causal structure. When combined with positional encodings, this evolving attention pattern appears to contribute to the observed dual clustering behavior - both first-demonstration and last- demonstration clustering - though our experiments in Section 4.2 suggest this interaction is complex and dependent on the specific type of positional encoding used. ', 'original_lines': 'This layerwise progression offers insights into how our theoretical and empirical findings might coexist: while the theoretical tendency toward first-token clustering manifests in early layers, the practical requirements of causal language modeling appear to lead to attention redistribution in later layers. When combined with positional encodings, this attention pattern may contribute to the observed dual clustering behavior - both first-demonstration and last-demonstration clustering. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'tokens remains distinctively high when sinusoidal positional encoding is employed in the absence of a causal attention mask, this phenomenon is not observed for rotary and trainable positional encoding. This suggests that the importance of ending tokens is influenced by the interplay between the causal ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'attention mask. Specifically, the importance of beginning tokens is markedly elevated when, and only when, the causal attention mask is applied, which aligns with the findings presented in Proposition 4.1. On the other hand, the case for last-demonstration is more complex. While the importance of ending ', 'modified_lines': '', 'original_lines': ' 7 (c)(a)(b) Under review as a conference paper at ICLR 2025 Exhaustive Cluster GPT-Neo-2.7B 97.4 (1.8) Llama-v2-7B 98.5 (4.5) MPT-7B 98.7 (11.9) 94.0 95.6 97.8 Relative decrease 3.5 2.9 0.9 Table 1: Accuracies (%) of Exhaustive and Cluster-based Search with the ideal selecting criterion with different LLMs on reverse task. The first two columns show accuracies, while the third one shows the relative decreased percentage of Cluster-based Search compared to Exhaustive Search. Exhaustive Search achieves the best accuracies on all LLMs, but the performances of Cluster-based Search are comparable. Numbers in parentheses are accuracies with the dumb selection criterion. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 378 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the accuracy upper bounds of Cluster-based Search are comparable to Exhaustive Search while significantly reducing the search time. ', 'modified_lines': '', 'original_lines': 'Accuracies of different LLMs on reverse task are reported in Table 1. Here we take the average accuracy over 1, 000 tuples of (E, q). See Appendix C for results of other tasks. While Exhaustive Search achieves the best accuracies on all LLMs, it is worth noticing that the performances of Cluster- based Search are comparable. Concretely, performances of Cluster-based Search only decrease relatively by 2.4% on average compared to Exhaustive Search, while search time of Cluster-based Search decreases by 91.7% (Figure 7). Note that with larger ktotal, the time saving is almost 100%. Additionally, the performance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 ACCELERATING SELECTION AND ORDERING OF IN-CONTEXT', 'after_section': None, 'context_after': 'Figure 7: Normalized accuracies and search times of Exhaustive Search and Cluster-based Search with the ideal selecting criterion. −1, i.e. c (ℓ−1) > c (cid:0)ℓ′ We run the Exhaustive Search and Cluster-based Search with Algorithm 1 on various ICL tasks introduced in Section 2.2. Each task includes 1, 000 tuples (E, q). Average accuracies of different ', 'paragraph_idx': 49, 'before_section': None, 'context_before': 'Bold: best; Underline: second best. Searching is obviously more effective than random selection. Moreover, the performances of Cluster-based Search are comparable to Exhaustive Search. ', 'modified_lines': 'Accuracies of different LLMs on reverse task are reported in Table 1. Here we take the average accu- racy over 1, 000 tuples of (E, q). See Appendix C for results of other tasks. While Exhaustive Search achieves the best accuracies on all LLMs, it is worth noticing that the performances of Cluster-based Search are comparable. Concretely, performances of Cluster-based Search only decrease relatively by 2.4% on average compared to Exhaustive Search, while search time of Cluster-based Search decreases by 91.7% (Figure 7). Note that with larger ktotal, the time saving is almost 100%. Additionally, the performance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. 5.2 ENTROPY-BASED SELECTION CRITERION We consider a more practical selecting criterion C, i.e. the popular entropy-based criterion (Lu et al., 2022), to show the effectiveness of Cluster-based Search in practice. To be concrete, denote ℓ−1 to be the logits of the first prediction step. The size of ℓ−1 is equal to the length of the token dictionary, and it is the output of the prediction head whose input is x−1. We define the confidence score of ℓ−1 to be c (ℓ−1) := −entropy (softmax (ℓ−1)). A permutation p of demonstrations is more confident than (cid:1). another permutation p′ if its associated logits ℓ−1 is more confident than ℓ′ Among all prompt candidates, we will select the most confident one. See Algorithm 1 in Appendix B for the pseudocode of this criterion. 
−1 ', 'original_lines': '5.2 ENTROPY-BASED SELECTION CRITERION We consider a more practical selecting criterion C, i.e. the popular entropy-based criterion (Lu et al., 2022), to show the effectiveness of Cluster-based Search in practice. To be concrete, denote ℓ−1 to be the logits of the first prediction step. The size of ℓ−1 is equal to the length of the token dictionary, and it is the output of the prediction head whose in- put is x−1. We define the confidence score of ℓ−1 to be c (ℓ−1) := −entropy (softmax (ℓ−1)). A per- mutation p of demonstrations is more confident than another permutation p′ if its associated logits ℓ−1 (cid:1). is more confident than ℓ′ −1 Among all prompt candidates, we will select the most confident one. See Algorithm 1 in Appendix B for the pseudocode of this criterion. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 RELATED WORK Recent research has unveiled the sensitivity to the order of in-context demonstrations in large language ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'with the maintained or improved accuracy, makes first-only clustering a clearly superior choice for demonstration selection. ', 'modified_lines': '', 'original_lines': '9 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Classification SymSen SymLan Rev. Reasoning Rep. ComSen Math GPT-Neo-2.7B Phi-2 (2.7B) Qwen-2.5-14B First Cluster First-Last Cluster First Cluster First-Last Cluster First Cluster First-Last Cluster 51.5 55.5 30.8 28.1 82.3 79.3 74.6 75.7 72.3 72.2 87.1 83.7 76.3 63.7 77.3 78.9 99.3 99.2 55.3 53.2 65.0 69.3 97.7 97.0 16.6 17.8 64.2 62.7 84.3 84.0 1.0 1.0 77.9 78.5 87.1 86.2 Avg 45.9 44.5 64.6 64.9 89.6 88.2 Table 3: Comparison between First-only and First-Last clustering search strategies across different models and tasks, showing accuracy (%) on classification and reasoning tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'insights, we introduce Cluster-based Search for demonstration selection and ordering in self-adaptive ICL methods, achieving comparable performances to exhaustive search while reducing computational complexity from factorial to quadratic. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'as first-token information is reused O(n2) times in a self-attention layer. The last-demonstration clustering appears to emerge through a more complex interaction between the causal structure and positional encoding, though its precise mechanisms warrant further investigation. 
Based on these ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Solve Arithmetic Word Problems With Verb Categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533, 2014. ', 'modified_lines': '', 'original_lines': 'HuggingFace. Language Identification Dataset, 2021. URL https://huggingface.co/ datasets/papluca/language-identification. Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186, 2024. Romuald A Janik. Aspects of Human Memory and Large Language Models. arXiv preprint arXiv:2311.03839, 2023. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 TASKSWe consider two types of tasks: classification and reasoning.', 'after_section': None, 'context_after': 'Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al. Phi-2: The surprising power of small language models. Microsoft Research Blog, 1(3):3, 2023. Sungyoon Lee, Jinseong Park, and Jaewook Lee. Implicit Jacobian Regularization Weighted With ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '646 647 ', 'modified_lines': 'HuggingFace. Language Identification Dataset, 2021. URL https://huggingface.co/ datasets/papluca/language-identification. Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186, 2024. Romuald A Janik. Aspects of Human Memory and Large Language Models. arXiv preprint arXiv:2311.03839, 2023. Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. The impact of positional encoding on length generalization in transformers. Advances in Neural Information Processing Systems, 36, 2024. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 10:16:26
ICLR.cc/2025/Conference
cLKZy30MFa
EukqlfSzf0
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '(e.g. the sentence’s sentiment). The query q only consists of an input qin, and the LLM will predict its associated output, i.e. ˆqout = LLM(E, q). The prediction ˆqout is compared with the ground-truth output qout to determine whether the LLM answers the query correctly. In Section 3 and Section 4, we investigate LLMs’ behaviors on ICL prompts built upon the same query ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'In-context learning (ICL) An ICL prompt consists of an ordered sequence of k demonstrations E = (e1, . . . , ek) and a query q. Each demonstration ei includes an input ein i (e.g. a sentence) and its ', 'modified_lines': 'associated output eout ', 'original_lines': 'associated output eout ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We consider two types of tasks: classification and reasoning. For text classification, we consider tasks of sentiment classification and language identification. We ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i 2.2 TASKS ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'dxi dt ', 'paragraph_idx': 2, 'before_section': None, 'context_before': ' ', 'modified_lines': '', 'original_lines': 'e⟨Q(t)xi(t),K(t)xj (t)⟩V (t)xj(t) , (1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 THEORETICAL ANALYSIS', 'after_section': '4.1 THEORETICAL ANALYSIS', 'context_after': 'for i = 1, n and t ≥ 0. Here Q(.), K(.), and V (.) are the query, key, and value matrices, Pxy is the projection of y ∈ Sd−1 onto the tangent space TxSd−1, and Zβ,i(t) := (cid:80)n k=1 eβ⟨Q(t)xi(t),K(t)xk(t)⟩ is the partition function. We further assume V (t) is identity for all t ≥ 0. Consider an initial states {xi(0)}n that satisfies the following hypothesis (H): there exists w ∈ Sd−1 such that ⟨w, xi(0)⟩ > 0 for all i = 1, n. Under the hypothesis (H), it has been proved that i=1 ∈ C 0 (cid:0)R≥0; (cid:0)Sd−1(cid:1)n(cid:1) is the unique solution of the corresponding Cauchy problem (1), if {xi(.)}n then there exists x∗ ∈ Sd−1 and constants C, λ > 0 such that ∥xi(t) − x∗∥ ≤ Ce−λt (Geshkovski et al., 2023). This means all items in the sequence {xi(t)}n i=1 become identical exponentially fast ', 'paragraph_idx': 36, 'before_section': '4.1 THEORETICAL ANALYSIS', 'context_before': '(cid:88) j=1 ', 'modified_lines': ' e⟨Q(t)xi(t),K(t)xj (t)⟩V (t)xj(t) , (1) 5 Published as a conference paper at ICLR 2025 Figure 5: Attention weights from the last token to the first token (blue) and last token (orange) with Qwen- 2.5-72B across layers and different tasks. The x-axis shows layer indices ranging 0-80; y-axis shows attention weights averaged over 100 prompts. Notably, attention to first token dominates early layers (0.8-0.9) aligning with Proposition 4.1, while attention to last token steadily increases and peaks in final layers (around 0.2). (Left) Mathematical arithmetic task. (Middle) Reverse task. (Right) Symbolic sentiment classification task. 
i=1 ∈ (cid:0)Rd(cid:1)n ', 'original_lines': ' i=1 ∈ (cid:0)Rd(cid:1)n 5 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 Under review as a conference paper at ICLR 2025 Figure 5: Attention weights from the last token to the first token (blue) and last token (orange) with Qwen- 2.5-72B across layers and different tasks. The x-axis shows layer indices ranging 0-80; y-axis shows attention weights averaged over 100 prompts. Notably, attention to first token dominates early layers (0.8-0.9) aligning with Proposition 4.1, while attention to last token steadily increases and peaks in final layers (around 0.2). (Left) Mathematical arithmetic task. (Middle) Reverse task. (Right) Symbolic sentiment classification task. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': 'Abstract', 'after_section': None, 'context_after': 'i(0)}n′ i(0)}n′ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i=1 and {x′ ', 'modified_lines': '', 'original_lines': 'i=1 and {x′ i(0)}n′ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '• Initial Phase (layers 1-40): The first token dominates attention with consistently high weights (0.8-0.9), showing remarkable stability in early layers. This strong initial convergence ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arithmetic, reverse and symbolic sentiment classification. This comprehensive sampling ensures our findings are robust across different task types and input lengths. The attention pattern exhibits three distinct phases: ', 'modified_lines': '', 'original_lines': ' 6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 010203040506070800.00.20.40.60.81.0Task: Matharithvs. first tokenvs. last token010203040506070800.00.20.40.60.81.0Task: Reverse010203040506070800.00.20.40.60.81.0Task: Symbolic-sentiment Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Figure 6: Partial derivative norms w.r.t. chunks in trained-from-scratch Transformers with different types of positional encodings. The x-axis is chunk indices ranging 0-9; y-axis is the partial derivative norms. (a) Sinusoidal positional encoding. (b) Rotary positional encoding. (c) Trainable positional encoding. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We prepare 100 randomized prompts and compute the partial derivative norms similarly to Section 3.2. To ensure the prompts are differently distributed to training ones, we build each prompt as a sequence ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'namely the sinusoidal, rotary, and trainable positional encoding, on WikiText2 dataset with SGD optimizer with learning rate 5 · 10−1 in 100 epochs. For fair comparisons, the training task for all Transformers is next-token prediction. 
', 'modified_lines': '', 'original_lines': ' 7 (c)(a)(b) Under review as a conference paper at ICLR 2025 Exhaustive Cluster GPT-Neo-2.7B 97.4 (1.8) Llama-v2-7B 98.5 (4.5) MPT-7B 98.7 (11.9) 94.0 95.6 97.8 Relative decrease 3.5 2.9 0.9 Table 1: Accuracies (%) of Exhaustive and Cluster-based Search with the ideal selecting criterion with different LLMs on reverse task. The first two columns show accuracies, while the third one shows the relative decreased percentage of Cluster-based Search compared to Exhaustive Search. Exhaustive Search achieves the best accuracies on all LLMs, but the performances of Cluster-based Search are comparable. Numbers in parentheses are accuracies with the dumb selection criterion. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Accuracies of different LLMs on reverse task are reported in Table 1. Here we take the average accu- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the dumb criterion, which always select the wrong prompt if exists. In the following, we show the accuracy upper bounds of Cluster-based Search are comparable to Exhaustive Search while significantly reducing the search time. ', 'modified_lines': '', 'original_lines': ' 8 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Under review as a conference paper at ICLR 2025 Classification Reasoning SymSen SymLan Rev. Rep. ComSen Math Avg GPT-Neo-2.7B Phi-2 (2.7B) Qwen-2.5-14B Random 51.34.7 Exhaustive 51.5 51.55.5 Cluster Random 23.13.7 Exhaustive 31.5 30.83.7 Cluster Random 71.73.0 Exhaustive 78.0 82.32.9 Cluster 65.24.0 77.0 74.63.3 59.34.8 76.0 72.35.1 76.32.8 86.0 87.13.3 56.65.1 66.5 76.35.1 77.94.9 80.5 77.33.5 99.10.8 100.0 99.30.9 49.54.1 49.0 55.35.6 53.03.4 71.0 65.04.6 93.32.4 96.0 97.71.6 17.33.1 18.7 16.63.5 61.24.0 63.5 64.23.9 84.33.4 84.2 84.32.5 1.00.8 0.5 1.00.8 75.14.0 77.0 77.94.2 85.13.2 85.5 87.13.0 40.2 43.9 45.9 58.3 66.6 64.6 85.0 88.3 89.6 Table 2: Accuracies (%) of Random Selection, Exhaustive Search, and Cluster-based Search with entropy-based selecting criterion on different LLMs and ICL tasks. The subscript numbers indicate the standard deviation over 10 runs. Due to computational constraints, we do not report standard deviations for Exhaustive Search. Bold: best; Underline: second best. Searching is obviously more effective than random selection. Moreover, the performances of Cluster-based Search are comparable to Exhaustive Search. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1', 'after_section': None, 'context_after': 'Figure 7: Normalized accuracies and search times of Exhaustive Search and Cluster-based ', 'paragraph_idx': 59, 'before_section': '5.1', 'context_before': '2.4% on average compared to Exhaustive Search, while search time of Cluster-based Search decreases by 91.7% (Figure 7). Note that with larger ktotal, the ', 'modified_lines': 'time saving is almost 100%. Additionally, the perfor- mance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. ', 'original_lines': 'time saving is almost 100%. 
Additionally, the performance gaps between the ideal and the dumb criterion (number in parentheses in Table 1) are huge for all LLMs, further highlighting the efficiency of Cluster-based Search. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 59}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This is particularly evident in GPT-Neo-2.7B where first-only clustering achieves an average accuracy of 45.9% compared to 44.5% for first-last clustering, and in Qwen-2.5-14B where it achieves 89.6% versus 88.2%. Most notably, this simpler approach reduces the computational complexity significantly ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'The results demonstrate that selecting only the first demonstration for clustering-based search achieves comparable or sometimes better accuracy compared to selecting both first and last demonstrations. ', 'modified_lines': '', 'original_lines': ' 9 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Classification SymSen SymLan Rev. Reasoning Rep. ComSen Math GPT-Neo-2.7B Phi-2 (2.7B) Qwen-2.5-14B First Cluster First-Last Cluster First Cluster First-Last Cluster First Cluster First-Last Cluster 51.5 55.5 30.8 28.1 82.3 79.3 74.6 75.7 72.3 72.2 87.1 83.7 76.3 63.7 77.3 78.9 99.3 99.2 55.3 53.2 65.0 69.3 97.7 97.0 16.6 17.8 64.2 62.7 84.3 84.0 1.0 1.0 77.9 78.5 87.1 86.2 Avg 45.9 44.5 64.6 64.9 89.6 88.2 Table 3: Comparison between First-only and First-Last clustering search strategies across different models and tasks, showing accuracy (%) on classification and reasoning tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 CONCLUSION AND LIMITATIONS We study the prompt embedding space to understand the order sensitivity of ICL in decoder-only ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and the recency effect, favoring recent ones (Glanzer and Cunitz, 1966). We give a brief discussion on the connection between the serial-position effect and our clustering property in Appendix A, and we believe that this research direction is interesting to explore further. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'HuggingFace. Language Identification Dataset, 2021. URL https://huggingface.co/ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to Solve Arithmetic Word Problems With Verb Categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533, 2014. 
', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive Deep Models for Semantic Compositionality Over a Sentiment ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Han Shi, Jiahui Gao, Hang Xu, Xiaodan Liang, Zhenguo Li, Lingpeng Kong, Stephen M. S. Lee, and James Tin-Yau Kwok. Revisiting Over-Smoothing in BERT From the Perspective of Graph. International Conference on Learning Representations, 2022. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-06 15:23:28
ICLR.cc/2025/Conference
i3dXbS15ys
t38MffFVLa
[{'section': '4 BERTScoreRec.', 'after_section': None, 'context_after': 'Pθt(y+ t−1|x), with which the local model will exhibit a strong watermark resistance ability. When λ1 increases, LoRD will tend to rely more on the guidance of yvic, resulting in a higher risk of introducing watermarks. In the case of λ1 = 1, the local model will converge to the victim ', 'paragraph_idx': 47, 'before_section': '4 BERTScoreRec.', 'context_before': 'When λ1 is small, the convergence of LoRD will substantially focus on maximizing ', 'modified_lines': 't−1|x)/Pθt(y− ', 'original_lines': 't−1|x)/Pθt(y− ', 'after_paragraph_idx': None, 'before_paragraph_idx': 47}, {'section': '2.2 LANGUAGE MODELING', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'From a global perspective, Lobj represents the exploration and the locality learning ability of LoRD, which can mitigate the influences of watermarks. On the other hand, Lreg ensures the stability of ', 'modified_lines': 'the training procedure. Therefore, L characterizes a trade-off via λ1 between the stability and the diversity during stealing, and Equation 11 can be seen as a special case of L with λ1 = 0.5. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5 EXPERIMENTS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 the training procedure. Therefore, L characterizes a trade-off via λ1 between the stability and the diversity during stealing, and Equation 11 can be seen as a special case of L with λ1 = 0.5. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': '5 EXPERIMENTS', 'context_after': '(·|x), y0 ∼ Pθ0 (·|x), and yvic ∼ Pθvic(·|x) denote the sampled responses from the trained local model (θNt), the initial local model (θ0), and the victim model (θvic), respectively. In Figure 5, we illustrate a “spectrum” of extracting various downstream tasks based on these two metrics defined in Equation 12. The figure can assist in recognizing and defending commercial LLM’s knowledge. From Figure 5, we observe five tasks forming the following three scenario groups and datasets coming from the same tasks are mostly in the same group: ', 'paragraph_idx': 54, 'before_section': None, 'context_before': 'x,y∈Dte (cid:80) x,y∈Dte ', 'modified_lines': 'where yNt ∼ PθNt M(yNt, y) M(yNt, y) M(yvic, y) M(y0, y) , P = F = (12) , ', 'original_lines': ' M(yNt, y) M(yNt, y) M(yvic, y) M(y0, y) , P = F = (12) , where yNt ∼ PθNt ', 'after_paragraph_idx': 54, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': '9 ', 'paragraph_idx': 57, 'before_section': '5 EXPERIMENTS', 'context_before': '• High fidelity but low performance-up (HFLP). The initial local model already achieves a comparable performance to the victim model. QAs and summarization are in this group. ', 'modified_lines': '• Low fidelity but high performance-up (LFHP). While MEAs significantly improve the local model’s performance, gaps between the local and victim models remain difficult to bridge with domain-specific extraction alone. 
Machine translation is a representative task whose reasons are explained in Section 5.2. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 57}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.3 RESISTANCE TO WATERMARKS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'HFLPHFHPLFHP Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' • Low fidelity but high performance-up (LFHP). While MEAs significantly improve the local model’s performance, gaps between the local and victim models remain difficult to bridge with domain-specific extraction alone. Machine translation is a representative task whose reasons are explained in Section 5.2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 THEORETICAL ANALYSIS', 'after_section': None, 'context_after': '6 CONCLUSION ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'LoRD, and demonstrate how LoRD’s performance varies with different values of λ1. The Z-score of LoRD witnesses a consistent increase as λ1 arises, indicating that the “confidence” in rejecting the hypothesis, i.e., the risk to be suspected, arises when λ1 increases. This finding coincides with ', 'modified_lines': 'the analysis in Section 4. However, λ1 = 0 is a abnormal point in WMT (de-en), which might be because it disables the regularization term of LoRD’s loss function. For tasks the local model does not own enough enough knowledge, it will lead to a significant performance degradation. Besides, we observe that the P-values of LoRD are generally higher than those of MLE when λ1 is below 0.8, indicating that LoRD typically exhibits stronger watermarking resistance than MLE in most situations. It is noteworthy that this enhanced resistance seems not a “tax” of MEAs efficacy, as the Rouge-L (F1) scores of LoRD consistently surpass those of MLE and do not exhibit a significant negative correlation with their P-values. ', 'original_lines': 'the analysis in Section 4. Besides, we observe that the P-values of LoRD are generally higher than those of MLE when λ1 is below 0.8, indicating that LoRD typically exhibits stronger watermarking resistance than MLE in most situations. It is noteworthy that this enhanced resistance seems not a “tax” of MEAs efficacy, as the Rouge-L (F1) scores of LoRD consistently surpass those of MLE and do not exhibit a significant negative correlation with their P-values. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 12:02:50
ICLR.cc/2025/Conference
eVOnLsUH43
vhx2B4gPOs
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'of expert routing. Due to the frequent and voluminous data exchanges, All-to-All communication has become a notable challenge to training efficiency. In this paper, we manage to accelerate All-to-All communication in MoE models ', 'modified_lines': 'from the training sample perspective, which is unexplored so far. In particular, we put forward the observation that tokens in the same training sample have cer- tain levels of locality in expert routing. Motivated by this, we develop NetMoE, which takes such locality into account and dynamically rearranges the placement of training samples to minimize All-to-All communication costs. Specifically, we model the All-to-All communication given the sample placement and formulate an integer programming problem to deduce the optimal placement in polynomial time. Experiments with 32 GPUs show that NetMoE achieves a maximum effi- ciency improvement of 1.67× compared with current MoE training frameworks. ', 'original_lines': 'from the training sample perspective, which is unexplored so far. In particular, we put forward the observation that tokens in the same training sample have certain levels of locality in expert routing. Motivated by this, we develop NetMoE, which takes such locality into account and dynamically rearranges the placement of train- ing samples to minimize All-to-All communication costs. Specifically, we model the All-to-All communication given the sample placement and formulate an inte- ger programming problem to deduce the optimal placement in polynomial time. Experiments with 32 GPUs show that NetMoE achieves a maximum efficiency improvement of 1.67× compared with state-of-the-art MoE training frameworks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'out increasing the computational cost. Com- bining MoE with Transformer-based models can yield outstanding performance across var- ious tasks, including natural language process- ing (Lepikhin et al., 2021; Fedus et al., 2022), 1 MHAExpert0gatingExpert1outputinput0Device0…MHAExpert2gatingExpert3outputinput1Device1MHAExpertE-2gatingExpertE-1outputinputJ-1DeviceJ-1All-to-AllScatterAll-to-AllGatherDataParallelismModelParallelism Figure 2: An example of sample exchange. The figure illustrates the All-to-All gather operation in a MoE layer with two nodes, each containing two devices, and each device having one expert. Different colors represent ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'In recent years, large language models (LLMs) have shown impressive performance in lan- guage understanding and generation (OpenAI, ', 'modified_lines': '2023; Touvron et al., 2023; Zhou et al., 2024; Dubey et al., 2024; Shao et al., 2024; Zhang et al., 2024a) due to the increasing model size. However, larger models often come with greater computational costs. To address this, Mixture of Experts (MoE) models have been in- troduced to expand the model size greatly with- computer vision (Riquelme et al., 2021; Liang et al., 2022), recommendation systems (Tang et al., 2020; Zou et al., 2022), and speech recognition (You et al., 2022; Kwon & Chung, 2023). Figure 1: An example of expert parallelism applied to an MoE model with J devices and E = 2J experts (each device has two different experts). 
MoE models often replace the feed-forward network (FFN) layer with the MoE layer, which con- sists of a gating network and several small FFNs, representing different experts. In the MoE layer, each token is routed by the gating network to only a few selected experts, and the final output is Published as a conference paper at ICLR 2025 ', 'original_lines': '2023; Touvron et al., 2023) due to the increas- ing model size. However, larger models often come with greater computational costs, making further scaling difficult. To address this, Mix- ture of Experts (MoE) models have been intro- duced to expand the model size greatly with- computer vision (Riquelme et al., 2021; Liang et al., 2022), recommendation systems (Tang et al., 2020; Zou et al., 2022), and speech recog- nition (You et al., 2022; Kwon & Chung, 2023). Figure 1: An example of expert parallelism ap- plied to an MoE model with J devices and E = 2J experts (each device has two different experts). MoE models often replace the feed-forward network (FFN) layer with the MoE layer, which consists of a gating network and several small FFNs, representing different experts. In the MoE layer, each token is routed by the gating network to only a few selected experts, and the final output is obtained by a weighted sum of the computations from the selected experts. By such means, we can increase the number of experts to expand the model size for better performance, while keeping the computation complexity constant. 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': None, 'context_after': 'Despite the above benefit, given the potentially large number of experts, the memory capacity of a single device is often insufficient. As a result, expert parallelism (Lepikhin et al., 2021; Fedus et al., ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'each node is 5 tokens. Fig. 2(c) displays the All-to-All gather operation after sample placement adjustment is enabled — the positions of samples on the devices change (samples 0 and 3 are exchanged), reducing the inter-node communication volume to 2 tokens per node. ', 'modified_lines': ' obtained by a weighted sum of the computations from the selected experts. By such means, we can increase the number of experts to expand the model size for better performance, while keeping the computation complexity constant. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '• We dissect the problem into two stages and develop a polynomial-time solution to efficiently derive the sample placement during training. 2 PRELIMINARY 2.1 PARALLELISM IN DISTRIBUTED TRAINING Expert Parallelism: As shown in Fig. 1, expert parallelism (Lepikhin et al., 2021; Fedus et al., 2022) can be regarded as combining model parallelism and data parallelism. 
It distributes expert ', 'paragraph_idx': 9, 'before_section': '1 INTRODUCTION', 'context_before': '• We formulate the dynamic sample placement problem as a combinatorial optimization problem, which aims to find the best sample placement that maximizes efficiency given the expert routing. ', 'modified_lines': ' • We conduct experiments with various models on 32 NVIDIA A800 GPUs. Results show that NetMoE outperforms current MoE training systems by up to 1.67× in terms of training efficiency. Data and Model Parallelism: In data parallelism (Li et al., 2020; Sergeev & Balso, 2018; Wang et al., 2023; Zhang et al., 2024b), each device maintains a complete copy of the model parameters, while different training samples are assigned to each device. After the backward computation is completed, the model gradients from all devices are aggregated before updating the model param- eters. In model parallelism (Narayanan et al., 2021b; Huang et al., 2019; Narayanan et al., 2021a; Guan et al., 2024), model parameters are distributed across multiple devices, with each device re- sponsible for only a portion of the model. Communication operations are necessary to transmit the intermediate results (a.k.a. forward activations and their backward gradients) to accomplish the forward and backward propagation. ', 'original_lines': '• We conduct experiments with various models on 32 NVIDIA A800 GPUs. Experimental results show that NetMoE outperforms state-of-the-art MoE training systems by up to 1.67× in terms of training efficiency. Data and Model Parallelism: In data parallelism (Li et al., 2020; Sergeev & Balso, 2018), each device maintains a complete copy of the model parameters, while different training samples are assigned to each device. After the backward computation is completed, the model gradients from all devices are aggregated before updating the model parameters. In model parallelism (Narayanan et al., 2021b; Huang et al., 2019; Narayanan et al., 2021a), model parameters are distributed across multiple devices, with each device responsible for only a portion of the model. Communication operations are necessary to transmit the intermediate results (a.k.a. forward activations and their backward gradients) to accomplish the forward and backward propagation. ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2.2 DISTRIBUTED TRAINING ACCELERATION TECHNIQUES FOR MOE MODELS', 'after_section': None, 'context_after': '3 NETMOE In this section, we introduce NetMoE, a novel framework designed to optimize distributed 4 Two-StageDissectionProblemFormulationILPProblem(Eq.5)§3.2Polynomial-timeSolverImplementationResidualInliningOffloadingSolverFFNFFNAddAddScatterScatterFFNSolver§3.1§3.2§3.31stStage2ndStage ILP(Eq.6)↓(0,1)-ILP(Eq.10)↓BipartiteGraphILP(Eq.7)↓(0,1)-ILP(Eq.11)↓BipartiteGraphN nodes…1stStage2ndStage…2ndStage Table 1: Notations used throughout this work. We assume I is divisible by J, and J is divisible by N , which are common in distributed training. ', 'paragraph_idx': 15, 'before_section': '2.2 DISTRIBUTED TRAINING ACCELERATION TECHNIQUES FOR MOE MODELS', 'context_before': 'MoE (Cai et al., 2024) proposes feeding the output of the current attention layer directly into the next MoE layer, enabling parallel forward propagation with the current MLP layer in order to fully overlap All-to-All communication with computation. Although these methods improve training ef- ', 'modified_lines': 'ficiency, they inevitably impact model convergence. 
When applying these methods, we usually need to run numerous trials to tune the hyper- parameters, such as adjusting the weight of the topology-aware routing loss (Chen et al., 2022) or tuning the hyper-parameters for differ- ent communication channels (Zeng & Xiong, 2023). Given that each trial of LLM training can take days or even months, their utility is inevitably hampered. In contrast, our work fo- cuses on how to accelerate All-to-All commu- nication without affecting model convergence. Figure 3: The overview of the method of NetMoE. training for MoE models by considering both data and network locality. Given a target MoE model and the hardware environment, NetMoE aims to minimize the All-to-All communication cost. Its core innovation lies in optimizing the placement of samples within each MoE layer to maximize the utilization of faster intra-node bandwidth, thereby reducing the communication volume over slower inter-node connections. Specifically, NetMoE swaps the samples across devices during each MoE layer, enabling more tokens to communicate within the node during All-to-All communication. Published as a conference paper at ICLR 2025 ', 'original_lines': 'ficiency, they inevitably impact model convergence. When applying these methods, we usually need to run numerous trials to tune the hyper-parameters, such as adjusting the weight of the topology- aware routing loss (Chen et al., 2022) or tuning the hyper-parameters for different communication channels (Zeng & Xiong, 2023). Given that each trial of LLM training can take days or even months, their utility is inevitably hampered. In contrast, our work focuses on how to accelerate All-to-All communication without affecting model convergence. training for MoE models by considering both data and network locality. Given a target MoE model and the hardware environment, NetMoE aims to minimize the All-to-All communica- tion cost. Its core innovation lies in optimiz- ing the placement of samples within each MoE layer to maximize the utilization of faster intra- node bandwidth, thereby reducing the commu- nication volume over slower inter-node connec- tions. Specifically, NetMoE swaps the samples across devices during each MoE layer, enabling more tokens to communicate within the node during All-to-All communication. Figure 3: The overview of the method of NetMoE. Fig. 3 illustrates the overview of this section. We begin by introducing the modeling of All-to-All communication in MoE training and formulate our optimization problem in §3.1. We then illustrate how to solve the problem in §3.2, with the detailed algorithm shown in Alg. 1. We also present our implementation details in §3.3. For clarity, the frequently used notations are listed in Table 1. 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 15}, {'section': '3 NETMOE', 'after_section': None, 'context_after': '3.1 PROBLEM FORMULATION ', 'paragraph_idx': 23, 'before_section': None, 'context_before': '∼2TB/s 400GB/s 100GB/s ', 'modified_lines': ' Fig. 3 illustrates the overview of this section. We begin by introducing the modeling of All-to-All communication in MoE training and formulate our optimization problem in §3.1. We then illustrate how to solve the problem in §3.2, with the detailed algorithm shown in Alg. 1. 
We also present our implementation details in §3.3. For clarity, the frequently used notations are listed in Table 1. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 PROBLEM SOLVING', 'after_section': '3.2 PROBLEM SOLVING', 'context_after': 'connections, the most time-consuming term is usually the inter-node one. Therefore, we propose a two-stage solving strategy: the first stage optimizes tinter at the global scale, while the second stage minimizes tintra within each node, without affecting tinter. Formally, suppose there are N nodes and each node consists of J/N devices, then the optimization formula of the first stage can be written as the following integer linear programming (ILP) problem: Node(SmpDev(i))∈ ', 'paragraph_idx': 33, 'before_section': '3.2 PROBLEM SOLVING', 'context_before': 'Two-Stage Dissection: Although Eq. 1 takes the maximum value of the two kinds of communica- tion cost, in practice, due to the significant bandwidth difference between the inter- and intra-node ', 'modified_lines': ' 1Our work is fully compatible with the dynamic expert placement technique. Specifically, in the problem formulation and solving of NetMoE, we do not make any assumption on the expert placement. Instead, it is treated as an input. Thus, we can dynamically adjust the expert placement like previous works, and NetMoE can still deduce the optimal sample placement. We would like to leave the combination as our future work. 6 Published as a conference paper at ICLR 2025 (a) Sample placement after the first stage. (b) Sample placement after the second stage. Figure 4: An example of the second stage optimization. Fig. 4(a) shows the MoE layer in N ode1 after the first stage optimization in Fig. 2(c). By applying the second stage optimization within the node, the intra-node communication can be reduced by 1 token (by swapping sample0 and sample2), as shown in Fig. 4(b). ', 'original_lines': ' 1Our work is fully compatible with the dynamic expert placement technique. Specifically, in the problem formulation and solving of NetMoE, we do not make any assumption on the expert placement. Instead, it is treated as an input. Thus, we can dynamically adjust the expert placement like previous works, and NetMoE can still deduce the optimal sample placement. We would like to leave the combination as our future work. 6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 (a) Sample placement after the first stage. (b) Sample placement after the second stage. Figure 4: An example of the second stage optimization. Fig. 4(a) shows the MoE layer in N ode1 after the first stage optimization in Fig. 2(c). By applying the second stage optimization within the node, the intra-node communication can be reduced by 1 token (by swapping sample0 and sample2), as shown in Fig. 4(b). ', 'after_paragraph_idx': 34, 'before_paragraph_idx': 33}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': None, 'context_after': '∧ Node(j) = n} be the set of experts reside on it ( J (cid:74) ∗ n ⊆ (cid:75) SmpDev(i)∈ ', 'paragraph_idx': 30, 'before_section': None, 'context_before': '(cid:74) device placement rather than obtained by Eq. 6). 
Then, to optimize for the n-th node, we should solve the following ILP problem: ', 'modified_lines': 't(l,gather) intra I[SmpDev(i) = j] = I/J for j ∈ + t(l+1,scatter) (cid:88) intra s.t. I I (cid:75) (cid:75) (cid:74) (cid:75) (cid:75) (cid:75) (cid:74) J (cid:74) n (7) ', 'original_lines': 'I I (cid:75) (cid:74) (cid:75) (cid:75) (cid:74) (cid:75) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': None, 'context_after': '∗ n i∈ I (cid:74) ∗ n Specifically, Fig. 2 can be regarded as the optimization of the first stage, while Fig. 4 demonstrates the second stage of optimization built upon it. Although the second stage consists of N ILP prob- ', 'paragraph_idx': 28, 'before_section': '3.1 PROBLEM FORMULATION', 'context_before': 'I ', 'modified_lines': '(cid:75) (cid:74) (cid:75) ', 'original_lines': '(cid:74) t(l,gather) intra + t(l+1,scatter) intra s.t. (cid:75) (cid:88) (cid:75) I[SmpDev(i) = j] = I/J for j ∈ J (cid:74) n (7) (cid:75) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '3.2 PROBLEM SOLVING', 'after_section': '3.2 PROBLEM SOLVING', 'context_after': 'ci,n = ', 'paragraph_idx': 38, 'before_section': '3.2 PROBLEM SOLVING', 'context_before': 'We first introduce how to transform the ILP problems into weighted bipartite matching problems. Let ci,n and c′ ', 'modified_lines': 'i,j represent the inter- and intra-node communication volume when placing the i-th sample on the j-th device in the n-th node, They can be calculated using the following formulas: ', 'original_lines': 'i,j represent the inter- and intra-node communication volume generated by placing the i-th sample on the j-th device that resides in the n-th node, They can be calculated using the following formulas: ', 'after_paragraph_idx': 38, 'before_paragraph_idx': 38}, {'section': '7 i∈', 'after_section': '7 i∈', 'context_after': 'Get route from the gating network and calculate num via Eq. 2 Invoke Solve(num) in a background thread Get input from All-to-All scatter ', 'paragraph_idx': 42, 'before_section': '7 i∈', 'context_before': 'else ', 'modified_lines': 'Get c, c′ via Eq. 8 and build bipartite graphs Get the optimal solution p∗ via the Kuhn-Munkres (KM) algorithm return the optimal sample placement according to p∗ ', 'original_lines': '', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}, {'section': '7 i∈', 'after_section': '7 i∈', 'context_after': '3.3 IMPLEMENTATION NetMoE is implemented on top of PyTorch (Paszke et al., 2019), with custom operations (e.g., the calculation of num, c, c′, and the KM algorithm) implemented in C++ and CUDA. The complete ', 'paragraph_idx': 42, 'before_section': '7 i∈', 'context_before': 'weight perfect matching in this bipartite graph, which can be ef- ficiently solved to optimality in polynomial time using the Kuhn- Munkres (KM) algorithm. Fig. 5 illustrates an example of con- ', 'modified_lines': 'structing a bipartite graph during the first stage in Fig. 2. The graph nodes on the left represent set P , and the graph nodes on the right represent set Q . Each pair of graph nodes is connected by a weighted edge, depicted by a dotted line. The red edges indicate the final matching scheme, where the total weight of all matched edges is minimized. Figure 5: An example of a bi- partite graph. i,⌊n/B⌋ ', 'original_lines': 'structing a bipartite graph during the first stage in Fig. 2. The graph nodes on the left represent set P , and the graph nodes on the right represent set Q . 
Each pair of graph nodes is connected by a weighted edge, depicted by a dotted line. The red edges indicate the final matching scheme, where the total weight of all matched edges is minimized. i,⌊n/B⌋ Figure 5: An example of a bi- partite. ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 SamplesNodes012300111token2tokens1token0tokenduplicateduplicate Table 3: Configurations of the evaluated models. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MoE layers. However, in NetMoE, the position of the training data changes after the All-to-All 2The problems of the second stage (Eq. 7) can also be transformed into (0,1)-ILP problems and solved in ', 'modified_lines': ' polynomial time similarly. We omit them here due to the space constraint and only discuss the first stage. Published as a conference paper at ICLR 2025 ', 'original_lines': 'polynomial time similarly. We omit them in the main text due to the space constraint and only discuss the first stage. Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 i∈', 'after_section': None, 'context_after': 'Offloading Solving Process: The KM algorithm is hard to parallelize, making it unsuitable for highly parallelized accelerators like GPUs, so we perform the solving process on the CPU. As shown 4 EXPERIMENTS ', 'paragraph_idx': 47, 'before_section': '7 i∈', 'context_before': 'gather operation, while the samples in the residual connections remain in their original positions. To ensure the correctness of the model, we inline the residual connections into the expert computation, as shown in line 12 of Alg. 1. This optimization ensures consistency in model accuracy before and ', 'modified_lines': 'after applying the algorithm. More details about inlining is elaborated in Appendix A.1. in line 9 of Alg. 1, after obtaining the routing results for the current layer, each device calculates and transfers num to the CPU memory. The routing results for the next layer, required by the optimization algorithm, can be predicted by directly passing the current layer’s input to the router of the next layer (Eliseev & Mazur, 2023; Tang et al., 2024). The solving process only needs to provide the new sample positions before the All-to-All gather operation. In this way, the solving process can be overlapped with the All-to-All scatter and expert computation. As we will show in §4.4, the solving time is fully hidden and thus introduces zero overhead. More discussion of algorithm selection and overlap potential is described in Appendix A.2. ', 'original_lines': 'after applying the algorithm. More details about inlining is elaborated in Appendix A. in line 9 of Alg. 1, after obtaining the routing results for the current layer, each device calculates and transfers num to the CPU memory. The routing results for the next layer, required by the optimiza- tion algorithm, can be predicted through routing frequency statistics (Nie et al., 2023; Zhai et al., 2023; Huang et al., 2023; Eliseev & Mazur, 2023). The solving process only needs to provide the new sample positions before the All-to-All gather operation. 
In this way, the solving process can be overlapped with the All-to-All scatter and expert computation. As we will show in §4.4, the solving time is fully hidden and thus introduces zero overhead. Although the time complexity of the KM algorithm is O(I 3), the current training process commonly employs gradient accumulation (Ten- sorflow, 2019; Pytorch, 2019) due to the limited GPU memory. Thus, the value of I is typically confined to an acceptable size, ensuring that the solving time can be effectively overlapped. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 47}, {'section': '7 i∈', 'after_section': None, 'context_after': 'In Appendix C, we have provided more experimental results to analyze the acceleration of All-to-All communication. 4.4 SOLVER PERFORMANCE ', 'paragraph_idx': 50, 'before_section': None, 'context_before': 'theoretical optimization values provided by the solver. It can be seen that the actual speedup in All- to-All communication is slightly less than the theoretical values. This discrepancy is reasonable, as our modeling of All-to-All communication assumes ideal conditions and does not account for ', 'modified_lines': 'potential routing conflicts or hardware-induced errors. ', 'original_lines': 'potential routing conflicts or hardware-induced errors. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 5: Summary of communication volume and proportion of sequence adjustment. For commu- nication volume, we provide the intra-node and inter-node communication volumes before and after applying NetMoE, with the increase or reduction given in parentheses. For the proportion of se- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'observed that the routing distribution changes during the model training process. However, NetMoE consistently reduces the inter-node communication by adjusting the sample placement given the dy- namic distributions. Consequently, the effectiveness of NetMoE is robust to the routing distribution. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-28 04:34:36
ICLR.cc/2025/Conference
8EQnlPhvic
YLpNaeGI7t
[]
2025-03-01 20:24:54
ICLR.cc/2025/Conference
YLpNaeGI7t
7NiJUeRQpG
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Published as a conference paper at ICLR 2025 real-world datasets are typically high-dimensional but often exhibit low intrinsic dimensionality (Facco et al., 2017; Spigler et al., 2020). This gap between theoretical data assumptions and actual data distributions highlights the need for more sophisticated analytical frameworks that capture the ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': '2024; Dandi et al., 2023a; Cui et al., 2024), and occasionally spiked covariance models (Ba et al., 2023; Mousavi-Hosseini et al., 2023). While these assumptions simplify analysis and provide in- sights, they overlook the complex nature of real-world data. In practical learning tasks, data is often ', 'modified_lines': 'better represented as a mixture of distributions (Seddik et al., 2020; Dandi et al., 2023b). Moreover, ', 'original_lines': 'better represented as a mixture of distributions (Seddik et al., 2020; Dandi et al., 2023b). Moreover, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': 'Abstract', 'after_section': None, 'context_after': 'By leveraging recent advancements in Gaussian universality, we provide a comprehensive characterization of training and generalization errors in the asymp- limit, where the input dimension, number of hidden neurons, and totically proportional ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'In this work, we study the performance of a two-layer neural network trained with a single gradient descent step under Gaussian mixture data with covariances including low-dimensional structure. Our data model captures both the mixture nature and intrinsic low-dimensionality ', 'modified_lines': 'in real-world datasets. ', 'original_lines': 'in real-world datasets. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'the notation f (·) ≍ g(·) to indicate that the functions f and g are of the same order with respect to the parameters k, n, and m. The notation O(·) represents the big-oh notation in relation to these parameters, and we also define ˜O(f (·)) as shorthand for O(f (·) polylog k), effectively allowing us ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'and highlight the significant impact of the structure of data on learning outcomes. Notations We adopt the standard notations established by Goodfellow et al. (2016) throughout ', 'modified_lines': 'this work unless specified otherwise. The spectral norm of a matrix F is denoted as ∥F ∥. We use ', 'original_lines': 'this work, unless specified otherwise. The spectral norm of a matrix F is denoted as ∥F ∥. We use ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 7}, {'section': '2 SETTING', 'after_section': None, 'context_after': '1 √ ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '2 SETTING ', 'modified_lines': 'We consider a supervised learning setup through a two-layer neural network (NN) defined by ', 'original_lines': 'We consider supervised learning setup through a two-layer neural network (NN) defined by ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 SETTING', 'after_section': '2 SETTING', 'context_after': 'plus identity (see assumption (A.4) in Section 4). 
We then define Σ := Cov(x) as the covariance matrix of the input vector x, and note that its spectral norm ∥Σ∥ will be the measure of data spread in our context. Finally, σ∗ : R2 → R is an unknown label generation function that can be a nonlinear ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': 'µj ∈ Rn and Σj ∈ Rn×n denote the mean and covariance of j-th component, respectively. We fur- ther assume that Σj exhibits a certain low dimensional structure that can be described as finite-rank ', 'original_lines': 'µj ∈ Rn and Σj ∈ Rn×n denote the mean and covariance of j-th component, respectively. We further assume that Σj exhibits certain low dimensional structure that can be described as finite-rank ', 'after_paragraph_idx': 9, 'before_paragraph_idx': None}, {'section': '3 RELATED WORK', 'after_section': '3 RELATED WORK', 'context_after': 'mations of two-layer neural networks (Ghorbani et al., 2020; 2021). Despite their simplicity, RFMs have proven instrumental in understanding various facets of machine learning, including general- ization (Mei & Montanari, 2022), transfer learning (Tripuraneni et al., 2021), out-of-distribution ', 'paragraph_idx': 15, 'before_section': '3 RELATED WORK', 'context_before': 'Random features — The random feature model (RFM) was initially proposed as a computation- ally efficient approximation to kernel methods (Rahimi & Recht, 2007). RFMs are closely related ', 'modified_lines': 'to the Neural Tangent Kernel (NTK) (Jacot et al., 2018) since both of them provide linear approxi- ', 'original_lines': 'to the Neural Tangent Kernel (NTK) (Jacot et al., 2018) since both of them provides linear approxi- ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '3 RELATED WORK', 'after_section': '3 RELATED WORK', 'context_after': 'highlighting the significance of structured data in RFM applications. Feature learning — While the existing literature on random features and universality provides ', 'paragraph_idx': 16, 'before_section': '3 RELATED WORK', 'context_before': 'risk minimization, allowing for broader analyses beyond RFMs (Montanari & Saeed, 2022). While the results by Montanari & Saeed (2022) focused on covariate inputs, Dandi et al. (2023b) recently broadened this framework to encompass inputs distributed as mixtures. Furthermore, Demir & Do- ', 'modified_lines': 'gan (2024) extended the universality of random features to Gaussian inputs with spiked covariance, ', 'original_lines': 'gan (2024) extended the universality of random feature to Gaussian inputs with spiked covariance, ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '3 RELATED WORK', 'after_section': '3 RELATED WORK', 'context_after': 'Gaussian mixtures — Most of the works discussed in this section, with the exception of Dandi et al. (2023b), have assumed Gaussian or spherical inputs, which do not adequately capture the ', 'paragraph_idx': 17, 'before_section': '3 RELATED WORK', 'context_before': 'studies, our approach considers Gaussian mixture inputs as defined in equation (2), allowing us to investigate the intriguing effects of data distribution on feature learning. While Ba et al. (2023) and Mousavi-Hosseini et al. 
(2023) examined Gaussian inputs with spiked covariance, their findings ', 'modified_lines': 'lack equivalent models for precise performance characterization and also lack mixture aspect of our data model (2), highlighting the novelty and significance of our work in this area. ', 'original_lines': 'lack equivalent models for precise performance characterization and also lacks mixture aspect of our data model (2), highlighting the novelty and significance of our work in this area. ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': '6 (13)', 'after_section': None, 'context_after': '(cid:12) (cid:12) c, κc ', 'paragraph_idx': 28, 'before_section': None, 'context_before': '(cid:104) ', 'modified_lines': 'σ( ˆF x)(z⊥)T (cid:12) ', 'original_lines': 'σ( ˆF x)z⊥ (cid:12) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 (13)', 'after_section': '6 (13)', 'context_after': 'Proof. Appendix B. Recall that c denotes the index of the Gaussian component in the input mixture, while κc represents ', 'paragraph_idx': 28, 'before_section': '6 (13)', 'context_before': '(ii) the corresponding generalization errors G also converge in probability to the same value if ', 'modified_lines': 'an additional assumption (A.9) provided in Appendix B hold. ', 'original_lines': 'an additional assumptions (A.9) provided in Appendix B hold. ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 28}, {'section': '6 (13)', 'after_section': '6 (13)', 'context_after': 'Proof. Appendix C. ', 'paragraph_idx': 28, 'before_section': '6 (13)', 'context_before': '(ii) the corresponding generalization errors G also converge in probability to the same value if ', 'modified_lines': 'an additional assumption (A.9) provided in Appendix B hold. ', 'original_lines': 'an additional assumptions (A.9) provided in Appendix B hold. ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 28}, {'section': '12 m(cid:88)', 'after_section': None, 'context_after': 'errors for both models also match closely. However, since generalization performance is of greater interest than training performance, we will concentrate on generalization errors in the subsequent plots. It is worth noting that our remaining simulation results are presented for the case of k/m = 1, ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'Effect of model complexity — Figure 1a demonstrates that the generalization errors of both the neural network and the equivalent Hermite model closely align across all values of k/m, reinforcing ', 'modified_lines': 'our theoretical findings. Supporting this, Figure 6 (given in Appendix D.3) reveals that the training ', 'original_lines': 'our theoretical findings. Supporting this, Figure 6 (found in Appendix D.3) reveals that the training ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 SIMULATION RESULTS AND DISCUSSION', 'after_section': None, 'context_after': 'Collectively, the results in Figures 1 and 2 confirm that the generalization performances of neural networks align closely with those of the equivalent Hermite model, underscoring the strength of our ', 'paragraph_idx': 34, 'before_section': '6 SIMULATION RESULTS AND DISCUSSION', 'context_before': 'This suggests that mixture data presents greater challenges than single Gaussian data. Finally, Figure 2c reveals that a higher effective rank (dc) of the covariance matrix leads to decreased generalization ', 'modified_lines': 'error. 
This occurs because a higher rank, on average, leads to a larger maximum eigenvalue due to our random sampling of eigenvalues for the covariance matrices. ', 'original_lines': 'error, as higher rank generally results in a larger average maximum eigenvalue due to our random sampling of eigenvalues from covariance matrices. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 34}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 mk ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '(cid:17) ˜X (cid:125) ', 'modified_lines': '− ', 'original_lines': '+ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 MAIN RESULTS', 'after_section': None, 'context_after': 'can used interchangeably in the bounds due to (A.1). Using (25), we get ∥σ′ ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'with high probability, due to concentration of norms of sub-Gaussian matrices (Vershynin, 2018, Theorem 4.4.5). The bound for ∥F ∥ is due to (A.6) and Tr(Σ) ≍ k by (A.4). The effect of the ', 'modified_lines': 'mixture (2) is handled by considering ˜X as a concatenation of Gaussian matrices. Note that k, n, m ', 'original_lines': 'mixture (2) is handled by considering ˜X as concatenation of Gaussian matrices. Note that k, n, m ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 (13)', 'after_section': None, 'context_after': '(cid:104) (cid:12) (cid:12) c, κc ', 'paragraph_idx': 28, 'before_section': '6 (13)', 'context_before': 'Ψ(c, κc) := E ', 'modified_lines': 'σ( ˆF x)(z⊥)T (cid:12) ', 'original_lines': 'σ( ˆF x)z⊥ (cid:12) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '5 MAIN RESULTS', 'after_section': None, 'context_after': 'gradient step to the Hermite model. First of all, recall the bulk+structure decomposition of ˆF x for the case of the input x|c on c-th Gaussian, ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'C PROOF OF THEOREM 4 ', 'modified_lines': 'In this section, we prove the equivalence of the two-layer neural network after training with one ', 'original_lines': 'In this section, we prove the equivalence of the two-layer neural network after trained with one ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
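Illustrative note on the record above: it edits a setting in which inputs are drawn from a Gaussian mixture whose component covariances have a finite-rank-plus-identity structure. The sketch below shows one way such data could be sampled; the dimension, number of components, spike rank, and eigenvalue scales are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_components, rank = 64, 3, 2            # input dim, mixture size, spike rank (assumed)
means = rng.standard_normal((n_components, n))
# Sigma_j = I + U_j diag(s_j) U_j^T : identity plus a finite-rank structured part (assumed form)
U = rng.standard_normal((n_components, n, rank))
s = rng.uniform(1.0, 5.0, size=(n_components, rank))

def sample(m):
    c = rng.integers(n_components, size=m)                  # mixture component per sample
    z = rng.standard_normal((m, n))                          # isotropic (identity) part
    spikes = rng.standard_normal((m, rank)) * np.sqrt(s[c])  # finite-rank part
    x = means[c] + z + np.einsum("mr,mnr->mn", spikes, U[c])
    return x, c

X, c = sample(1000)
print(X.shape, np.bincount(c))
```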
2025-03-15 10:03:54
ICLR.cc/2025/Conference
7NiJUeRQpG
kfJUkb6mlS
[]
2025-03-15 10:14:00
ICLR.cc/2025/Conference
kfJUkb6mlS
ev7M33F7uT
[]
2025-03-15 10:44:46
ICLR.cc/2025/Conference
ev7M33F7uT
mOEcp24vz4
[{'section': '2 SETTING', 'after_section': None, 'context_after': 'm (cid:88) ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'T := 1 ', 'modified_lines': '2m ', 'original_lines': '2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 SETTING', 'after_section': None, 'context_after': '1 √ k (cid:19)2 ', 'paragraph_idx': 12, 'before_section': '2 SETTING', 'context_before': 'i=1 yi − ', 'modified_lines': ' ˆwT σ( ˆF xi) ', 'original_lines': ' ˆwT σ( ˆF xi) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}]
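The single change in the record above replaces the 1/2 prefactor of the squared training loss with 1/(2m); from the surrounding fragments the objective appears to be T = (1/2m) Σ_{i=1}^{m} ( y_i − (1/√k) ŵᵀ σ(F̂ x_i) )². The sketch below just evaluates that expression numerically; the dimensions, the random weights, and the ReLU activation are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 50, 100                              # samples, input dim, hidden width (assumed)
X = rng.standard_normal((m, n))                     # inputs x_i
y = rng.standard_normal(m)                          # labels y_i
F_hat = rng.standard_normal((k, n)) / np.sqrt(n)    # first-layer weights (assumed scaling)
w_hat = rng.standard_normal(k)                      # second-layer weights
sigma = lambda z: np.maximum(z, 0.0)                # assumed activation (ReLU)

# T = 1/(2m) * sum_i ( y_i - (1/sqrt(k)) w^T sigma(F x_i) )^2
preds = sigma(X @ F_hat.T) @ w_hat / np.sqrt(k)
T = np.sum((y - preds) ** 2) / (2 * m)
print(T)
```

Dividing by m makes T an average rather than a sum over the samples, so its scale stays comparable as m grows.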
2025-05-17 16:19:27
ICLR.cc/2025/Conference
j0XVq4g6fL
BOVOjzXIMt
[{'section': 'Abstract', 'after_section': None, 'context_after': '10 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Thomas Decker, Ananta R Bhattarai, Jindong Gu, Volker Tresp, and Florian Buettner. Provably better explanations with optimized aggregation of feature attributions. In International Conference on Machine Learning, pp. 10267–10286. PMLR, 2024. ', 'modified_lines': '', 'original_lines': ' Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, and Karthikeyan Shanmugam. Is this the right neighborhood? accurate and query efficient model agnostic explanations. Advances in Neural Information Processing Systems, 35:9499–9511, 2022. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Saliency strikes back: How filtering out high frequencies improves white-box explanations. In Proceedings of the 41st International Conference on Machine Learning, pp. 37041–37075. PMLR, 2024. ', 'modified_lines': '', 'original_lines': ' Ramin Okhrati and Aldo Lipani. A multilinear sampling algorithm to estimate shapley values. In 2020 25th International Conference on Pattern Recognition (ICPR), pp. 7992–7999. IEEE, 2021. Guillermo Owen. Multilinear extensions of games. Management Science, 18(5-part-2):64–79, 1972. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof of Theorem 2. Let zS be a query, the probability of sampling zS over the integration path is: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '· (f (zS∪{i}) − f (zS)) = Shi where zS denotes a query with S being the set of indices corresponding to the present features. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '13 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Completeness requires the equivalence between the sum of allocated feature attributions and the difference in prediction results made by full feature presence as stated in equation 1. ', 'modified_lines': '', 'original_lines': ' Proof of Completeness. The contribution of a sample zS to attribution estimation in GEFA can be divided into two parts, the contribution with a positive sign wi∈S to the present features {xi|i ∈ S}, and the contribution with a negative sign wı /∈S to the absent features. According to equation 8, the contribution is computed by: wi∈S = f (zS) · 1 γ wi /∈S = −f (zS) · 1 1 − γ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Definition 1. A function is said to be functionally independent of a feature if the prediction results are always the same for any sample pair that differs only in that feature. 14 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Insensitivity is also known as Dummy, which requires the attribution score to be zero for any feature on which the target model is not functionally dependent. Definition 1 formally describes functional independence. ', 'modified_lines': ' ', 'original_lines': ' Proof of Insensitivity. 
Let xi be the dummy feature, the proxy gradient estimator of that feature on the straightline path is: ηαi(x(γ · 1p)) = Eπ(ϵ|γ·1p)[f (ϵ x ⊕ ¯ϵ ˚x) · ( ϵi γ − ¯ϵi 1 − γ )] Using π(ϵ\\i|γ · 1p−1) as a shorthand for the feature value sampling process excluding the i-th feature, the expectation can be expanded to the following form due to the independent sampling processes of different features: ηαi(x(α)) = Eπ(ϵ\\i|γ·1p−1) (cid:104) Eπ(ϵi|γ)[f (ϵ x ⊕ ¯ϵ ˚x) · ( ϵi γ − ¯ϵi 1 − γ (cid:105) )] ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'It is not difficult to show that sampling of the two features following the same distribution given xi = xj and ˚xi = ˚xj, which induces: ηαi(x(γ · 1p)) = ηαj (x(γ · 1p)) 15 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Given the symmetry between xi and xj, the inner expectations satisfy: Eπ(ϵ\\i|γ·1p−1)[f (ϵ x ⊕ ¯ϵ ˚x)] = Eπ(ϵ\\j |γ·1p−1)[f (ϵ x ⊕ ¯ϵ ˚x)], when ϵi = ϵj ', 'modified_lines': '', 'original_lines': ' Integrating the estimators having the same outputs along the symmetric path concludes the proof by showing: ξi = (cid:90) 1 0 ηαi(x(γ · 1p)) dγ = (cid:90) 1 0 ηαj (x(γ · 1p)) dγ = ξj ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ϵ∈{0,1}p:ϵi=0 (cid:18)p − 1 |ϵ| ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(Theorem 2) ', 'modified_lines': '', 'original_lines': '· |ϵ|!(p − |ϵ| − 1)! p! · (cid:16) h(|ϵ| + 1) − h(|ϵ|) (cid:17) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '16 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i ) ', 'modified_lines': '', 'original_lines': ' The optimal variance reduction effect for ξi is achieved when: )/Var(ξ(h) β = Cov(ξi, ξ(h) (12) i Alternative to a feature-specific optimal value, we are also interested in a single value for β that maximizes the overall variance reduction effect. To acquire the overall optimum, we first expand the covariance in equation 12: ) i Cov(ξi, ξ(h) i ] − E[ξi] · E[ξ(h) ] i ) = E[ξi · ξ(h) (cid:104) = Eαi i (cid:105) Eϵi[f (z) · h(z) · (∇xi log π(ϵi|αi))2] − E[ξi] · 0 (Unbiasedness of ξ(h)) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 GRADIENT ESTIMATION UNDER A BLACK-BOX SETTING', 'after_section': None, 'context_after': 'ξIG i = zS ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'zS∪{i} in only the i-th feature along edges, the goal is simplified prove that they are calculators of the marginal contribution conditioned on the presence of features {xj|j ∈ S} for each segment of a path. For the i-th segment on an edge path with S denoting the preceding vertices, IG produces: ', 'modified_lines': ' (cid:90) zS∪{i} ', 'original_lines': '(cid:90) zS∪{i} ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '17 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ifi ∈ S ifi /∈ S ', 'modified_lines': '', 'original_lines': ' When following the same permutation order, GEFA produces the same marginal contribution as IG for the i-th segment: ξGEFA i = (cid:90) αS∪{i} αS Eπ(ϵi|αi)[f (z) · ( ϵi αi − ¯ϵi 1 − αi )] dα = f (zS∪{i}) − f (zS) ⇔ ξIG i Please note that, for GEFA, the only feature value in z that may vary during the sampling on the i-th segment is zi. 
The remaining features are deterministic as their corresponding proxy variables are either 0 or 1 depending on whether they are included in the preceding vertices S, namely to take either the baseline or explicand value with hundred percent probability. As both explainers deliver marginal contributions along edge paths, the claim in Theorem 4 becomes obvious as it describes the typical computation of Shapley Values. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'fine-tuning. Based on the fine-tuned EfficientNet-B0, we derived explanations for images in all three partitions with three competitors: IG, PSHAP, and GEFA. Features for each instance were ranked in descending order according to their attribution scores. Similar to the traditional deletion scheme, we ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'used for retraining the pre-trained model to assess the quality of explanations. Without loss of gener- ality, we downsampled the dataset into 2000/400/400 partitions for training, validation, and test sets for efficiency. EfficientNet-B0 achieved an accuracy of 99.40% on the downsampled dataset after ', 'modified_lines': '', 'original_lines': ' 5https://pytorch.org/vision/main/models/efficientnet.html 18 Under review as a conference paper at ICLR 2025 Table 3: Performance of explainers in different settings Competitors IG PSHAP GEFA Random : lower is better; ROAR 77.20 79.25 82.35 71.30 : higher is better In Accuracy (%) ROAR-abs 62.80 76.75 68.45 71.30 KEAR 89.60 84.30 89.95 71.30 nAOPC 40.82 39.56 40.79 35.07 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
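The proofs manipulated in the record above revolve around a path-sampling estimator of the form η_{α_i}(x(γ·1_p)) = E_{π(ε|γ·1_p)}[ f(ε ⊙ x ⊕ ε̄ ⊙ x̊) · (ε_i/γ − ε̄_i/(1−γ)) ], with attributions ξ_i obtained by integrating over γ ∈ (0, 1). The Monte Carlo sketch below only illustrates that kind of estimator; the black-box model f, the baseline x̊, the Bernoulli(γ) mask sampling, and the γ grid are assumptions and not the paper's implementation.

```python
import numpy as np

def path_attributions(f, x, x_base, n_gammas=20, n_samples=256, rng=None):
    """Monte Carlo sketch: sample Bernoulli(gamma) masks eps, mix explicand x with
    baseline x_base, weight f's output by eps_i/gamma - (1-eps_i)/(1-gamma), and
    average over an interior grid of gamma to approximate the integral on (0, 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = x.shape[0]
    xi = np.zeros(p)
    gammas = (np.arange(n_gammas) + 0.5) / n_gammas   # avoids the endpoints gamma = 0, 1
    for gamma in gammas:
        eps = rng.random((n_samples, p)) < gamma       # eps ~ Bernoulli(gamma), per feature
        z = np.where(eps, x, x_base)                   # present features from x, absent from x_base
        fz = np.array([f(row) for row in z])           # black-box queries
        weights = eps / gamma - (~eps) / (1.0 - gamma) # eps_i/gamma - bar_eps_i/(1-gamma)
        xi += (fz[:, None] * weights).mean(axis=0)
    return xi / n_gammas

# toy usage with an assumed quadratic black box
f = lambda v: float(v[0] * 2.0 + v[1] ** 2)
print(path_attributions(f, x=np.array([1.0, 2.0, 3.0]), x_base=np.zeros(3)))
```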
2024-11-26 19:26:03
ICLR.cc/2025/Conference
ydeFL3wT9B
E0hdGr2wXK
[]
2024-11-22 09:55:03
ICLR.cc/2025/Conference
E0hdGr2wXK
tXUVOUO2nU
[]
2024-11-22 10:24:25
ICLR.cc/2025/Conference
tXUVOUO2nU
NPizsmaPef
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'However, considering annotators may not be comprehensive in various types of knowledge back- To enable controllable chain-of-thought alignment in LLMs, we principally track LLMs’ decision- making process in generating chain-of-thought reasoning steps, by formulating the process as a Markov Decision Process (MDP) whose goal is to reach the correct final answer with minimal • We propose an offline evaluation framework, OCEAN, which bridges the heterogeneity be- • With the evaluation framework, we further develop a direct policy optimization method • We provide a theoretical analysis of the unbiasedness and establish a lower bound for the variance of our KG-IPS estimator. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'is costly (Thomas et al., 2015; Bhargava et al., 2024), risky, and impractical (Yu et al., 2021). Re- cent studies in LLMs leverage human feedback to align models’ behaviors with human preferences in single-turn generation (Ouyang et al., 2022; Rafailov et al., 2024) and multi-step reasoning tasks ', 'modified_lines': '(Joshi et al., 2024). In addition, complicated LLM agentic frameworks, which involve multi-agent collaboration, orchestration, and cooperation, rely heavily on efficient Roucher et al. (2025); Wu et al. (2023a), robust Masterman et al. (2024); Nguyen et al. (2024a), trustworthy Yao et al. (2024), personalized Li et al. (2024c); Wu et al. (2024b); Zhang et al. (2024b), and proactive Yao et al. ∗These authors contributed equally to this work. 1 Published as a conference paper at ICLR 2025 (2023); Xia et al. (2025); Ma et al. (2023) chain-of-thought abilities, which needs to be finetuned offline Putta et al. (2024); Wu et al. (2025) before deploying them online. Due to the high cost of deploying LLMs online and interacting with human feedback, Bhargava et al. (2024) further enables offline evaluation of LLMs from logged human feedback to align LLMs’ response generation. grounds, human feedback on chain-of-thought reasoning (Joshi et al., 2024) can be more challeng- ing to collect. In addition, since chain-of-thought reasoning involves a sequential decision-making process, the volume of collected human feedback can be exponentially increased. Due to such chal- lenges, conventional reinforcement learning from human feedback (RLHF) methods (Ouyang et al., 2022; Bai et al., 2022a) can suffer from training inefficiencies and scalability issues. Motivated by recent works in using knowledge graphs (KGs) as side information to enable prompt engineering (Wang et al., 2024c; Xia et al., 2024b), self-correction (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wu et al., 2024c), evaluating chain-of-thought (Nguyen et al., 2024b), and model fine-tuning (Wang et al., 2024b; Tang et al., 2024), we propose leveraging KGs as weak yet controllable knowledge reasoners to effectively measure the alignment between LLMs’ multi- step chain-of-thought reasoning and multi-hop KG trajectories by inverse propensity scores (IPS) (Joachims et al., 2017). In contrast to the existing chain-of-thought evaluation (Nguyen et al., 2024b) method, which relies on accurate chain-of-thought grounding on specific KG, we propose verbal- izing the KG trajectories and developing a KG policy that serves as a verbal reasoning mechanism over the graphs. 
Therefore, we can bridge the heterogeneity between KG and LLM reasoning forms, and the verbalized KG policy can be generalized to be compatible with various LLMs. knowledge exploration and exploitation Lissandrini et al. (2020b;a); Wu et al. (2024a). Then, we propose offline chain-of-thought evaluation and alignment, OCEAN, which evaluates the generated chain of thoughts from off-policy LLMs through collected offline data samples with feedback from a KG. The improved Knowledge Graph - Inverse Propensity Scores (KG-IPS) approach considers the effects of feedback from the KG policy that aligns the model’s chain-of-thought generation and the behavior policy, which prevents model degeneration. We prove that the KG-IPS estimator pro- vides an unbiased estimate of the target policy, with a lower bound for the variance, and establish confidence intervals using sub-Gaussian concentration inequalities. To enable direct optimization of LLM policies, we leverage the proposed KG-IPS policy evaluation approach for LLM fine-tuning by directly maximize estimated policy values through gradient descent. Then we empirically evaluate the optimized LLM policy on three types of chain-of-thought reasoning tasks, and demonstrate the effectiveness of the proposed policy optimization method. We also observe relative performance im- provements across evaluation tasks, without affecting LLMs’ generalizability or generation quality. We summarize our contributions as follows: tween LLM and KG reasoning, for effective evaluations of chain-of-thought. which enables efficient alignment with automatic feedback from the KG. • To facilitate the evaluation and optimization, we model the KG preference and derive feed- back by developing a policy which verbalizes KG trajectories. ', 'original_lines': '(Joshi et al., 2024). Due to the high cost of deploying LLMs online and interacting with human feedback, Bhargava et al. (2024) further enables offline evaluation of LLMs from logged human feedback to align LLMs’ response generation. grounds and the associated reasoning, human feedback on chain-of-thought reasoning (Joshi et al., 2024) can be more challenging to collect. In addition, since chain-of-thought reasoning involves a sequential decision-making process, the volume of collected human feedback can be exponen- tially increased. Due to such challenges, conventional reinforcement learning from human feedback (RLHF) methods (Ouyang et al., 2022; Bai et al., 2022a) can suffer from training inefficient and scalability issues. 
1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Motivated by recent works in using knowledge graphs as side information to enable prompt engi- neering (Wang et al., 2024c), self-correction (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wu et al., 2024), evaluating chain-of-thought (Nguyen et al., 2024), and model fine-tuning (Wang et al., 2024b; Tang et al., 2024), we propose leveraging knowledge graphs as weak yet control- lable knowledge reasoners to effectively measure the alignment between LLMs’ multi-step chain- of-thought reasoning and multi-hop knowledge graph trajectories by inverse propensity scores (IPS) (Joachims et al., 2017). In contrast to the existing chain-of-thought evaluation (Nguyen et al., 2024) method, which relies on accurate chain-of-thought grounding on specific knowledge graphs, we propose verbalizing the knowledge graph trajectories and developing a knowledge graph policy that serves as a verbal reasoning mechanism over the graphs. Therefore, we can bridge the heterogeneity between knowledge graph and LLM reasoning forms, and the verbalized knowledge graph policy can be generalized to be compatible with various LLMs. knowledge exploration and exploitation. Then, we propose offline chain-of-thought evaluation and alignment, OCEAN, which evaluates the generated chain of thoughts from off-policy LLMs through collected offline data samples with feedback from a knowledge graph (KG). The improved Knowl- edge Graph - Inverse Propensity Scores (KG-IPS) approach considers the effects of feedback from the knowledge graph policy that aligns the model’s chain-of-thought generation and the behavior policy, which prevents model degeneration. We prove that the KG-IPS estimator provides an unbi- ased estimate of the target policy, with a lower bound for the variance, and establish confidence inter- vals using sub-Gaussian concentration inequalities. To enable direct optimization of LLM policies, we leverage the proposed KG-IPS policy evaluation approach for LLM fine-tuning by directly max- imize estimated policy values through gradient descent. Then we empirically evaluate the optimized LLM policy on three types of chain-of-thought reasoning tasks, and demonstrate the effectiveness of the proposed policy optimization method. We also observe relative performance improvements across evaluation tasks, without affecting LLMs’ generalizability or generation quality. We summa- rize our contributions as follows: tween LLM and knowledge graph reasoning, for effective evaluations of chain-of-thought. which enables efficient alignment with automatic feedback from the knowledge graph. • To facilitate the evaluation and optimization, we model the knowledge-graph preference and derive feedback by developing a policy which verbalizes knowledge-graph trajectories. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 3}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'densities of human feedback to offer fine-grained rewards for RL finetuning, and Sun et al. 
(2024a) focuses on aligning LLMs with reward models driven by human-defined principles. To address RLHF’s limitations such as heavy reliance on human input, alternative approaches like Reinforce- ', 'paragraph_idx': 9, 'before_section': '2 RELATED WORK', 'context_before': 'LLM Alignment Reinforcement Learning from Human Feedback (RLHF) has been the dominant approach, optimizing LLMs using human-annotated data to align model behavior with user prefer- ences (Ouyang et al., 2022; Bai et al., 2022a). DPO (Rafailov et al., 2024) and RRHF (Yuan et al., ', 'modified_lines': '2023) are proposed to reduce the training instability of RLHF. Wu et al. (2023b) utilizes varying ', 'original_lines': '2023) are proposed to reduce the training instability of RLHF. Wu et al. (2023) utilizes varying 2 Under review as a conference paper at ICLR 2025 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 PRELIMINARY ', 'paragraph_idx': 10, 'before_section': '2 RELATED WORK', 'context_before': 'paths as an MDP and using KGs to ensure both factual accuracy and human-like reasoning. Chain-of-thought Reasoning Chain-of-thought prompting has been widely applied to elicit the ', 'modified_lines': 'strong reasoning abilities of LLMs (Wei et al., 2022; Chu et al., 2023; Xia et al., 2024a). By de- composing a complex problem into a sequence of intermediate sub-tasks, LLMs are able to focus on important details and solve the step-by-step (Huang & Chang, 2023; Yu et al., 2023). Despite the remarkable performance improvements, recent studies have found that LLMs often generate un- faithful chain-of-thought reasoning paths that contain factually incorrect rationales (Turpin et al., 2023; Lanham et al., 2023). To address this, a number of works leverage LLMs’ self-evaluation abilities to verify and refine each reasoning step (Ling et al., 2023; Madaan et al., 2023). As the factual errors in the generated chain-of-thought may also be caused by the limited or outdated para- metric knowledge of LLMs, recent methods incorporate external knowledge sources to further edit unfaithful content in the reasoning path (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wang et al., 2024d;a). While these methods focus on knowledge augmentation and editing through prompts, our method, in comparison, directly aligns LLM internal knowledge with faithful and factual chain-of-thought, which avoids potential knowledge conflicts between parametric and non- parametric knowledge when generating reasoning paths. ', 'original_lines': 'strong reasoning abilities of LLMs (Wei et al., 2022; Chu et al., 2023; Xia et al., 2024). By decom- posing a complex problem into a sequence of intermediate sub-tasks, LLMs are able to focus on important details and solve the step-by-step (Huang & Chang, 2023; Yu et al., 2023). Despite the remarkable performance improvements, recent studies have found that LLMs often generate unfaith- ful chain-of-thought reasoning paths that contain factually incorrect rationales (Turpin et al., 2023; Lanham et al., 2023). To address this, a number of works leverage LLMs’ self-evaluation abilities to verify and refine each reasoning step (Ling et al., 2023; Madaan et al., 2023). 
As the factual errors in the generated chain-of-thought may also be caused by the limited or outdated parametric knowledge of LLMs, recent methods incorporate external knowledge sources to further edit unfaith- ful content in the reasoning path (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wang et al., 2024d;a). While these methods focus on knowledge augmentation and editing through prompts, our method, in comparison, directly aligns LLM internal knowledge with faithful and factual chain-of- thought, which avoids potential knowledge conflicts between parametric and non-parametric knowl- edge when generating reasoning paths. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 9}, {'section': 'Abstract', 'after_section': None, 'context_after': 'and require a large effort of engineering due to the discrepancy between the unstructured generation of LLMs and structured knowledge graphs (Pan et al., 2024). Therefore, we propose to offline evaluate and optimize the target policy aligning with knowledge graph preference. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'function is to evaluate each thought given the state as rt = r(st, ct). Although such formulation of chain-of-thought enables direct LLM on-policy optimization via reinforcement learning, direct interaction with knowledge graphs to collect per-step reward in LLMs can be practically challenging ', 'modified_lines': '', 'original_lines': ' 3 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 VERBALIZED KNOWLEDGE GRAPH REASONING', 'after_section': None, 'context_after': '(1) (rt, et) ∼ µ ((rt, et)|e0, r1, e1, . . . , rt−1, et−1) , (2) ', 'paragraph_idx': 14, 'before_section': '3.2 VERBALIZED KNOWLEDGE GRAPH REASONING', 'context_before': 'graph G = (E, V) consisting of the outgoing edges of current entity et−1, (rt, et) ∈ {(r′, e′)|(et−1, r′, e′) ∈ G} , ', 'modified_lines': 'where the transition feasibility of the entity et−1 to all the outgoing edges is entirely determined by G. Knowledge graph reasoning starts with a decomposed triple (e0, r1, e1) of the instruction q, and produces a chain of triplets h = (e0, r1, e1, . . . , rT , eT ) by sampling from a policy µ, ', 'original_lines': ' where the transition feasibility of the entity et−1 to all the outgoing edges is entirely determined by G. The process of knowledge graph reasoning starts with a decomposed triple (e0, r1, e1) of the instruction q, and produces a chain of triplets h = (e0, r1, e1, . . . , rT , eT ) by sampling from a policy µ, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'after_section': None, 'context_after': '4.1 OFFLINE EVALUATION AND OPTIMIZATION One of the most broadly used offline evaluation approaches is inverse propensity scores (Ionides, 2008; Dud´ık et al., 2011), which has been used for LLM-based offline policy evaluation for various i=1, where τi = (s(i) t+1)Ti ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'We propose an offline evaluation of the chain-of-thought generation process aligned with knowledge graph preference. The off-policy estimator can be used for policy optimization that aligns LLMs with more faithful reasoning paths from knowledge graphs (Lin et al., 2023). We develop a small ', 'modified_lines': 'language model as a behavior policy that models the knowledge graph preference. 
In Figure 1, we illustrate the workflow of our proposed framework OCEAN. purposes (Bhargava et al., 2024; Dhawan et al., 2024; Wu et al., 2022). Given the offline logged chain-of-thought trajectories D = {τi}N t=0, we propose a KG- IPS estimator considering two-folded weights of entity tokens in the knowledge graph preference policy µϕ and of non-entity tokens in the base LLM policy π0 , ', 'original_lines': 'language model as a behavior policy that models the knowledge graph preference. purposes (Bhargava et al., 2024; Dhawan et al., 2024). Given the offline logged chain-of-thought trajectories D = {τi}N t=0, we propose a KG-IPS estimator considering two-folded weights of entity tokens in the knowledge graph preference policy µϕ and of non-entity tokens in the base LLM policy π0 , ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_after': 'maximum value of the weighted terms, and n is the number of samples. Given V (θ) is the true value function of the target policy πθ, applying the concentration inequality for sub-Gaussian variables, the KG-IPS estimator satisfies the following confidence interval with probability at least 1 − δ: ', 'paragraph_idx': 22, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': 'the LLM’s behaviors on non-entity tokens without model degeneration. To further formalize our approach and illustrate the variance inherent in the KG-IPS estimator, we present the following Lemma, which provides a lower bound on the variance, ', 'modified_lines': 'Lemma 2. The variance of the KG-IPS estimator is lower bounded by Ω( M 2 n ), where M denotes the ', 'original_lines': 'Lemma 2. The variance of the KG-IPS estimator is lower bounded by M 2 4n , where M denotes the ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': None, 'context_after': 'To further support our findings, we demonstrate that the optimal policy for the final reward is con- sistent with the optimal policy for the entity-based knowledge graph reward, which means the non- entity-based LLM reward can be considered as a regularization term that does not affect the optimal θ ← θ + ∇θ ˆVKG−IP S(θ). (4) 4.2 KNOWLEDGE GRAPH PREFERENCE MODELING To facilitate the evaluation and optimization, we model knowledge graph preference and derive feedback by developing the behavior policy µϕ which verbalizes knowledge-graph trajectories. Ran- ', 'paragraph_idx': 23, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': '. ', 'modified_lines': 'A detailed analysis of the variance and confidence interval can be found in Appendix B. policy. See Appendix C for a complete analysis. In the end, we could directly optimize the target policy by maximizing the estimated value function through policy gradient, 5 Step 1 (Section 3.2): Verbalized Knowledge Graph ReasoningStep 3 (Section 4.1 OCEAN): Offline Chain-of-thought Evaluation and Optimization Sampled Trajectories from KG :(A Collection 1984–1989, performed by, Jane Siberry)(Jane Siberry, was born in, Toronto)(Toronto, has a, castle)(castle, is named, Casa Loma)Trajectory VerbalizationA Collection 1984–1989 was performed by Jane Siberry. Jane Siberry was born in Toronto. 
The castle in Toronto is named Casa Loma.Policy Gradient OptimizationPolicy Gradient(Equation 4)Preference Model(Small LM)Target Model(LLMs)Reference Model(Pretrained LLMs)Preference Model(Small LM)Sampled Trajectories from KG :(A Collection 1984–1989, performed by, Jane Siberry)(Jane Siberry, was born in, Toronto)(Toronto, has a, castle)(castle, is named, Casa Loma)Trajectory VerbalizationA Collection 1984–1989 was performed by Jane Siberry. Jane Siberry was born in Toronto. The castle in Toronto is named Casa Loma.Policy Gradient OptimizationStep 2 (Section 4.2): Knowledge Graph Preference ModelingPreference Model(Small LM)Policy Gradient(Equation 4)Target Policy(LLMs)Base Policy(Pretrained LLMs)Preference Policy(Small LM)KG-IPS Estimator(Equation 3)GradientBack-propagationInitialized at time step 0 Published as a conference paper at ICLR 2025 ', 'original_lines': 'The detailed proof and variance analysis are provided in Appendix B. policy. See Appendix C for a complete analysis. In the end, we could directly optimize the target policy by maximizing the estimated value function through policy gradient, (a) Distribution of QA Accuracy (b) Distribution of Relations (c) Distribution of Entities Figure 1: Sampling distributions of (a) trajectories in the knowledge graph that are verbalized as multi-step QA tasks and successfully answered by the LLM itself, (b) relations, and (c) entities in the knowledge graphs and their frequencies of the appearance in the trajectories sampled from the Wikidata5M (Wang et al., 2021) knowledge graph. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': '3.2 VERBALIZED KNOWLEDGE GRAPH REASONING', 'after_section': None, 'context_after': '∇ϕJ(ϕ) = ∇ϕ ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'ˆy ∼ f (y|ˆq, c), R(h|c) = E [1 {eT = ˆy}] , ', 'modified_lines': 'where the reward of the trajectory is determined by the answer accuracy. We estimate the reward function R(h|c) as the normalized question-answering accuracy (detailed in Appendix D). Then we fine-tune the preference policy µϕ directly via policy gradient optimization, ', 'original_lines': 'where the reward of the trajectory is determined by the answer accuracy. In Figure 1a, we present the probability distribution of sampled trajectories, with respect to the number of correct answers gen- erated per trajectory from ten differently sampled questions associated with each trajectory. Based on such self-consistency measurement, we estimate the reward function R(h|c) as the normalized 5 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 0.00.20.40.60.8Accuracy01234ProbabilitySampled Relations020040060080010001200FreqSampled Entities0100200300400Freq Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 question-answering accuracy. 
Then we fine-tune the preference policy µϕ directly via policy gradi- ent optimization, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 KNOWLEDGE GRAPH PREFERENCE MODELING', 'after_section': '4.2 KNOWLEDGE GRAPH PREFERENCE MODELING', 'context_after': 'graph trajectories, we observe that the relation distribution is relatively more skewed toward the most frequent relations. This suggests that the verbalized knowledge graph reasoning policy is likely to focus on more frequent reasoning transitions, potentially enhancing its ability to learn meaningful ', 'paragraph_idx': 25, 'before_section': '4.2 KNOWLEDGE GRAPH PREFERENCE MODELING', 'context_before': 'R(hk|ck) log µϕ(yk,t|qk, yk,<t). ', 'modified_lines': 'Based on the distribution of relations (Figure 4b) and entities (Figure 4c) in the sampled knowledge ', 'original_lines': 'Based on the distribution of relations (Figure 1b) and entities (Figure 1c) in the sampled knowledge ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 24}, {'section': '5.1', 'after_section': '5.1', 'context_after': 'The second focus is multi-hop reasoning, where models must combine information from multiple sources. HotpotQA (Yang et al., 2018) requires reasoning across multiple Wikipedia documents. ', 'paragraph_idx': 27, 'before_section': '5.1', 'context_before': 'question answering. For knowledge-intensive reasoning, we use datasets requiring deeper domain understanding. The AI2 Reasoning Challenge (ARC) (Clark et al., 2018) tests models’ advanced reasoning abilities with grade-school science questions. PubMedQA (Jin et al., 2019) assesses ', 'modified_lines': 'biomedical reasoning from research abstracts. Finally, SciQA (Auer et al., 2023) challenges models to reason over scientific knowledge using the Open Research Knowledge Graph (ORKG). ', 'original_lines': 'biomedical reasoning by requiring yes/no/maybe answers from research abstracts. Finally, SciQA (Auer et al., 2023) challenges models to reason over scientific knowledge using the Open Research Knowledge Graph (ORKG). ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 27}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.2 MULTI-HOP QUESTION ANSWERING We evaluate the chain-of-thought reasoning performance of OCEAN compared with base LLMs and ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'supervised learning, we also enable instruction-tuning as a baseline (SFT), which is fine-tuned with the question as instruction and the answer as the response. ', 'modified_lines': '', 'original_lines': 'Knowledge Graph Preference Model. The knowledge graph preference model is developed based on the pre-trained GPT2-Medium model (Radford et al., 2019). We collected 6K question-answering 6 Under review as a conference paper at ICLR 2025 pairs from the Wikidata5M (Wang et al., 2021) knowledge graph based on the sampling strategy in Section 4.2. The sampled knowledge graph trajectories are composed into natural language prefixed by the corresponding questions by the GPT-4 model, which verbalizes the knowledge graph reason- ing trajectories and aligns with generative language models’ behaviors. The model is then fine-tuned with a base learning rate of 1e − 4 for 10 epochs with a linear learning scheduler. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.3 KNOWLEDGE-INTENSIVE QUESTION ANSWERING', 'after_section': None, 'context_after': '7 Model ', 'paragraph_idx': 33, 'before_section': '5.3 KNOWLEDGE-INTENSIVE QUESTION ANSWERING', 'context_before': 'To understand the effectiveness of OCEAN in knowledge-intensive question-answering tasks, we show performance comparison with base LLMs (Base) and supervised fine-tuning (SFT) in Ta- ble 2. Comparing SFT and Base LLMs, we observe that directly aligning knowledge graphs with ', 'modified_lines': 'LLMs may suffer from domain and knowledge inconsistency when downstream tasks require spe- cific domain knowledge, conflicting with the knowledge graph in the fine-tuning stage. We also observe that SFT achieves 4.85% and 0.55% average improvements on the PubMedQA dataset, with and without context respectively, whereas it suffers from 29.60%, 8.35%, 13.6% average per- formance decreases on the remaining tasks. Such significant discrepancies in SFT’s effects across different downstream tasks further show the risk in direct knowledge editing in LLMs. Published as a conference paper at ICLR 2025 ', 'original_lines': 'LLMs may suffer from domain change and knowledge inconsistency when downstream tasks re- quire specific domain knowledge that could conflict with the knowledge graph in the fine-tuning stage. We also observe that SFT achieves 4.85% and 0.55% average improvements of base models on the PubMedQA dataset, with and without context respectively, whereas it suffers from 29.60%, 8.35%, 13.6% average performance decreases on the remaining tasks. Such significant discrepan- cies in SFT’s effects on different downstream tasks further show the risk in direct knowledge editing in LLMs. With the enhancement of OCEAN, question-answering accuracy of knowledge-intensive tasks gener- ally improved, while OCEAN fine-tuned LLMs achieving the best performance on all three datasets, 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'except for PubMedQA without context where SFT achieves better performance due to knowledge transfer from knowledge graph dataset. We also observe consistent policy value improvement on PubMedQA and SciQA, where the original policy values of base LLMs are relatively lower. For ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'performance on ARC dataset is without context. We also use the test/validation split for each dataset to report estimated policy values ˆV (θ). We highlight the best metric in bold font for each task. 
', 'modified_lines': 'With the enhancement of OCEAN, question-answering accuracy of knowledge-intensive tasks gener- ally improved, while OCEAN fine-tuned LLMs achieving the best performance on all three datasets, ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 7}, {'section': '5.4 COMMONSENSE REASONING', 'after_section': None, 'context_after': '8 6 ANALYSIS ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'the average performance. We highlight the best method in bold font for each task and LLM. Finally, to demonstrate OCEAN’s generalizability in preserving commonsense knowledge and pre- ', 'modified_lines': 'venting knowledge catastrophic forgetting (Luo et al., 2023; Wu et al., 2025), we evaluate OCEAN with base LLMs (Base) and supervised fine-tuning (SFT) on five commonsense reasoning tasks in Table 3. Since such tasks do not require external domain knowledge, we only evaluate the accuracy of the model’s generated answers. We observe that directly applying supervised fine-tuning (SFT) using knowledge graphs significantly impacts large language models (LLMs), potentially leading to catastrophic forgetting of commonsense knowledge. especially for the backbone LLMs of Gemma- 2 and Mistral-0.2. In contrast, we show that OCEAN achieves robust performance on commonsense reasoning by leveraging off-policy evaluation and optimization from knowledge graph’s feedback. OCEAN manages to maintain comparable performance of base LLMs (e.g., Llama-3 and Phi-3.5), Published as a conference paper at ICLR 2025 which have strong zero-shot commmonsense reasoning abilities. In addition, we observe that for base LLM with relatively lower performance (e.g., Gemma-2 and Mistral-0.2), OCEAN enables con- sistent improvements. Therefore, OCEAN serves as a robust off-policy alignment paradigm to incor- porating knowledge graph reasoning without affecting the generalizability of pretrained LLMs. ', 'original_lines': 'venting knowledge catastrophic forgetting (Luo et al., 2023), we evaluate OCEAN with base LLMs (Base) and supervised fine-tuning (SFT) on five commonsense reasoning tasks in Table 3. Since such tasks do not require chain-of-thought generation or external domain knowledge, we only eval- uate the accuracy of the model’s generated answers. We observe the high impact of direct SFT on knowledge graph on LLMs in potential catastrophic forgetting of commonsense knowledge, es- pecially for the backbone LLMs of Gemma-2 and Mistral-0.2. In contrast, we show that OCEAN achieves robust performance on commonsense reasoning leverage off-policy evaluation and opti- mization from knowlege graph’s feedback. OCEAN manages to maintain comparable performance of base LLMs (e.g., Llama-3 and Phi-3.5), which have strong zero-shot commmonsense reasoning abilities. In addition, we observe that for base LLM with relatively lower performance (e.g., Gemma- 2 and Mistral-0.2), OCEAN enables consistent improvements. Therefore, OCEAN serves as a robust Under review as a conference paper at ICLR 2025 off-policy alignment paradigm to incorporating knowledge graph reasoning without affecting the generalizability and behaviors of pretrained LLMs. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.2 EVALUATION OF GENERATION QUALITY POST ALIGNMENT', 'after_section': '6.2 EVALUATION OF GENERATION QUALITY POST ALIGNMENT', 'context_after': 'To further evaluate the generation quality of post-alignment LLMs, we use the Self-BLEU (Zhu et al., 2018) and Distinct-2 (Li et al., 2015) scores to evaluate the diversity of the generation, concerning ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'Figure 2: Comparison results of base LLMs and OCEAN on three evaluation metrics, Self-BLEU, Distinct-2, and AlignScore. Lower Self-BLEU scores and higher Distinct-2 scores indicate better ', 'modified_lines': 'diversity of the generated text, while higher AlignScore indicates better faithfulness. ', 'original_lines': 'diversity of the generated text, while higher AlignScore indicates better faithfulness in the generated answers. ', 'after_paragraph_idx': 48, 'before_paragraph_idx': None}, {'section': '6.3 CASE STUDY', 'after_section': None, 'context_after': '7 CONCLUSION ', 'paragraph_idx': 52, 'before_section': '6.3 CASE STUDY', 'context_before': 'streamlining the reasoning process and arriving at the solution with greater simplicity. Figure 3: Sample comparison between the base model and OCEAN on Llama-3 and Gemma-2. Our ', 'modified_lines': 'method enables a more precise and concise chain of thought. ', 'original_lines': 'method enables a more precise and concise Chain of thought. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 52}, {'section': '5.1', 'after_section': None, 'context_after': 'REFERENCES ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'proved the unbiasedness of the KG-IPS estimator and provided a lower bound on its variance. Em- pirically, our framework effectively optimizes chain-of-thought reasoning while maintaining LLMs’ general downstream task performance, offering a promising solution for enhancing reasoning capa- ', 'modified_lines': "bilities in large language models. Empirically, our framework not only enhances chain-of-thought reasoning but also offers a robust offline evaluation mechanism for agentic frameworks, enabling the safe assessment of autonomous decision-making processes. Future work could explore integrating this approach into reinforcement learning and multi-agent systems to further validate its utility in complex, dynamic environments. 10 Base Llama-3: 1. The question asks what people typically do while playing guitar. ...... 4. Playing the guitar typically involves plucking or strumming the strings to produce sound, which is a form of making music. 5. The other options don't make sense in the context of playing the guitar: ...... (C) Singing is an activity that can be done while playing guitar, but it's not the primary action people do while playing the instrument. ...... Therefore, the correct answer is (E) making music.Q: What do people typically do while playing guitar? Choose the correct answer from the following choices. (A) cry (B) hear sounds (C) singing (D) arthritis (E) making musicOCEAN Llama-3: 1. The question is asking about what people typically do while playing guitar. ...... 4. Singing is a common activity that people do while playing guitar, as it is a common way to accompany oneself while playing the instrument. 5. Making music is a broad term that encompasses a wide range of activities, including playing the guitar. 
Based on the above reasoning, the correct answer is: C) singingQ: What is the height above or below sea level called?Sample Answer: elevationBase Gemma-2: 1. What is sea level? Sea level is the average level of the ocean. 2. What is the height of something above or below sea level? This is the vertical distance of an object from sea level. 3. What is the term for this vertical distance? This is the height above or below sea level. Final Answer: The height above or below sea level is called **altitude**. OCEAN Gemma-2: Answer: Elevation. Reasoning: 1. Elevation is the height of a place above or below sea level. Redundant chain-of-thoughtA more concise thinking processInaccurate InformationWrong answerAccurate InformationCorrect Answer Published as a conference paper at ICLR 2025 ", 'original_lines': 'bilities in large language models. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Ale- man, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. Phi-3 technical re- port: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. ', 'modified_lines': '', 'original_lines': " 10 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Base Llama-3: 1. The question asks what people typically do while playing guitar. ...... 4. Playing the guitar typically involves plucking or strumming the strings to produce sound, which is a form of making music. 5. The other options don't make sense in the context of playing the guitar: ...... (C) Singing is an activity that can be done while playing guitar, but it's not the primary action people do while playing the instrument. ...... Therefore, the correct answer is (E) making music.Q: What do people typically do while playing guitar? Choose the correct answer from the following choices. (A) cry (B) hear sounds (C) singing (D) arthritis (E) making musicOCEAN Llama-3: 1. The question is asking about what people typically do while playing guitar. ...... 4. Singing is a common activity that people do while playing guitar, as it is a common way to accompany oneself while playing the instrument. 5. Making music is a broad term that encompasses a wide range of activities, including playing the guitar. Based on the above reasoning, the correct answer is: C) singingQ: What is the height above or below sea level called?Sample Answer: elevationBase Gemma-2: 1. What is sea level? Sea level is the average level of the ocean. 2. What is the height of something above or below sea level? This is the vertical distance of an object from sea level. 3. What is the term for this vertical distance? This is the height above or below sea level. Final Answer: The height above or below sea level is called **altitude**. OCEAN Gemma-2: Answer: Elevation. Reasoning: 1. Elevation is the height of a place above or below sea level. 
Redundant chain-of-thoughtA more concise thinking processInaccurate InformationWrong answerAccurate InformationCorrect Answer Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ", 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Jianing Wang, Qiushi Sun, Xiang Li, and Ming Gao. Boosting language models reasoning with chain-of-knowledge prompting. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'sociation for Computational Linguistics, 2022a. URL https://doi.org/10.18653/v1/ 2022.emnlp-main.207. ', 'modified_lines': '', 'original_lines': '14 Under review as a conference paper at ICLR 2025 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. RRHF: Rank responses to align language models with human feedback. In Thirty-seventh Conference on ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Zihan Yu, Liang He, Zhen Wu, Xinyu Dai, and Jiajun Chen. Towards better chain-of-thought prompting strategies: A survey. arXiv preprint arXiv:2310.04959, 2023. ', 'modified_lines': '', 'original_lines': ' 15 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': None, 'context_after': 'N (cid:88) ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '1 N ', 'modified_lines': 'Ti(cid:88) t=1 Ti(cid:88) t=1 Ti(cid:88) t=1 Ti(cid:88) ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 Ti|c(i) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'N (cid:88) ', 'modified_lines': '', 'original_lines': ' i=1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'E ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 't | ', 'modified_lines': '', 'original_lines': ' Ti(cid:88) t=1 Ti(cid:88) t=1 Ti(cid:88) t=1 Ti(cid:88) t=1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'E ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Ti(cid:88) t=1 ', 'modified_lines': '', 'original_lines': ' Ti(cid:88) t=1 This completes the proof. B VARIANCE ANALYSIS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
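The OCEAN record above describes a KG-IPS estimator that weights entity tokens against a knowledge-graph preference policy μ_φ and the remaining tokens against the base LLM policy π_0, then maximises the estimated value with a policy-gradient step θ ← θ + ∇_θ V̂_KG-IPS(θ) (its Equation 4). The exact estimator is not reproduced in the record, so the function below is only a generic token-level IPS-style sketch under assumed log-probabilities and rewards.

```python
import numpy as np

def kg_ips_value(logp_target, logp_kg, logp_base, rewards, is_entity):
    """Hedged sketch of a token-level IPS-style value estimate: entity tokens are
    importance-weighted against a KG preference policy, all other tokens against the
    base LM policy, and the re-weighted per-token rewards are averaged. The paper's
    KG-IPS estimator may combine these terms differently."""
    logp_behavior = np.where(is_entity, logp_kg, logp_base)
    ratios = np.exp(logp_target - logp_behavior)   # pi_theta / (mu_phi or pi_0)
    return float(np.mean(ratios * rewards))

# toy trajectory of 4 tokens, 2 of them entity tokens (all numbers assumed)
logp_target = np.log([0.5, 0.2, 0.4, 0.3])
logp_kg     = np.log([0.4, 0.4, 0.4, 0.4])
logp_base   = np.log([0.6, 0.3, 0.5, 0.2])
rewards     = np.array([0.0, 1.0, 0.0, 1.0])
is_entity   = np.array([False, True, False, True])
print(kg_ips_value(logp_target, logp_kg, logp_base, rewards, is_entity))
```

If logp_target came from an autodiff framework rather than fixed numbers, the same expression could be back-propagated to realise the gradient-ascent step on θ described in the record.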
2025-03-02 01:07:42
ICLR.cc/2025/Conference
NPizsmaPef
kkGeznktxt
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '∗These authors contributed equally to this work. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'is costly (Thomas et al., 2015; Bhargava et al., 2024), risky, and impractical (Yu et al., 2021). Re- cent studies in LLMs leverage human feedback to align models’ behaviors with human preferences in single-turn generation (Ouyang et al., 2022; Rafailov et al., 2024) and multi-step reasoning tasks ', 'modified_lines': '(Joshi et al., 2024). In addition, complicated LLM agentic frameworks, involving multi-agent col- laboration, orchestration, and cooperation, rely heavily on efficient (Roucher et al., 2025; Wu et al., 2023a), robust (Masterman et al., 2024; Nguyen et al., 2024a), and proactive (Yao et al., 2023; Xia et al., 2025; Ma et al., 2023) chain-of-thought reasoning abilities, which needs to be finetuned offline ', 'original_lines': '(Joshi et al., 2024). In addition, complicated LLM agentic frameworks, which involve multi-agent collaboration, orchestration, and cooperation, rely heavily on efficient Roucher et al. (2025); Wu et al. (2023a), robust Masterman et al. (2024); Nguyen et al. (2024a), trustworthy Yao et al. (2024), personalized Li et al. (2024c); Wu et al. (2024b); Zhang et al. (2024b), and proactive Yao et al. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'However, considering annotators may not be comprehensive in various types of knowledge back- grounds, human feedback on chain-of-thought reasoning (Joshi et al., 2024) can be more challeng- ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': '(Putta et al., 2024) before deploying them online. Due to the high cost of deploying LLMs online and interacting with human feedback, Bhargava et al. (2024) further enables offline evaluation of LLMs from logged human feedback to align LLMs’ response generation. ', 'original_lines': '(2023); Xia et al. (2025); Ma et al. (2023) chain-of-thought abilities, which needs to be finetuned offline Putta et al. (2024); Wu et al. (2025) before deploying them online. Due to the high cost of deploying LLMs online and interacting with human feedback, Bhargava et al. (2024) further enables offline evaluation of LLMs from logged human feedback to align LLMs’ response generation. 
', 'after_paragraph_idx': 4, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'model fine-tuning (Wang et al., 2024b; Tang et al., 2024), we propose leveraging KGs as weak yet controllable knowledge reasoners to effectively measure the alignment between LLMs’ multi- step chain-of-thought reasoning and multi-hop KG trajectories by inverse propensity scores (IPS) ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'Motivated by recent works in using knowledge graphs (KGs) as side information to enable prompt engineering (Wang et al., 2024c; Xia et al., 2024b), self-correction (Zhao et al., 2023; Wang et al., ', 'modified_lines': '2023; Li et al., 2024b; Wu et al., 2024b), evaluating chain-of-thought (Nguyen et al., 2024b), and ', 'original_lines': '2023; Li et al., 2024b; Wu et al., 2024c), evaluating chain-of-thought (Nguyen et al., 2024b), and ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Published as a conference paper at ICLR 2025 user feedback (Gilotte et al., 2018; Jeunen, 2019). Recent work (Gao et al., 2024) also develops an OPE estimator for LLM evaluation based on human feedback. Different from previous works, we study and formulate chain-of-thought generation in LLM as an MDP and use knowledge graph ', 'paragraph_idx': 7, 'before_section': '2 RELATED WORK', 'context_before': 'Offline Policy Evaluation Offline policy evaluation (OPE) is essential when online deploying learned policies is risky and impractical (Levine et al., 2020). OPE has been applied to various prac- ', 'modified_lines': 'tical applications, including evaluating the recommender system’s behavior with offline collected ', 'original_lines': 'tical applications, including evaluating the recommender system’s behavior with offline collected ', 'after_paragraph_idx': None, 'before_paragraph_idx': 7}, {'section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'after_section': None, 'context_after': '3 Published as a conference paper at ICLR 2025 i=0. The action space {1, . . . , |V|}Nt in LLMs is a sequence of Nt tokens reasoning paths (ci)t−1 sampled from an identical and finite vocabulary set V. The LLM policy πθ samples next-step thought ', 'paragraph_idx': 12, 'before_section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'context_before': 'Chain-of-thought reasoning can be viewed as a Markov Decision Process (MDP) (Sutton, 2018): starting with the instruction prompt q, the LLM sequentially decides and generates the next-step reasoning path ct that navigates until it arrives at a target final answer y. Given the LLM policy πθ, ', 'modified_lines': 'at time step t, each state st ∈ S comprises of the instruction prompt q and previously generated ', 'original_lines': 'at time step t, each state st ∈ S comprises of the instruction prompt q and previously generated ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_after': 't ) + 1{v ∈ c(i) t \\a(i) ', 'paragraph_idx': 16, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': 't ) = 1{v ∈ a(i) ', 'modified_lines': 'where λ(v|s(i) t } · π0(v|s(i)). We follow (Zhang et al., 2024) to use the log-likelihood score of each token in the base policy π0 as the reward function. 
Es- tablishing the unbiasedness of the KG-IPS estimator is essential for reliable policy evaluation (Jiang & Li, 2016; Bhargava et al., 2024). We formalize this in the following lemma: Lemma 1. The KG-IPS estimator provides an unbiased estimate of the target policy πθ. ', 'original_lines': 'where λ(v|s(i) t } · π0(v|s(i)). We follow (Zhang et al., 2024a) to use the log-likelihood score of each token in the base policy π0 as the reward function. Establishing the unbiasedness of the KG-IPS estimator is essential for reliable policy evaluation (Jiang & Li, 2016; Bhargava et al., 2024). We formalize this in the following lemma: Lemma 1. The KG-IPS estimator provides an unbiased estimate of the target policy πθ. ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 16}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_after': 'Lemma 2. The variance of the KG-IPS estimator is lower bounded by Ω( M 2 n ), where M denotes the maximum value of the weighted terms, and n is the number of samples. Given V (θ) is the true value ', 'paragraph_idx': 19, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': 't )/µϕ(v|s(i) ', 'modified_lines': 'The standard IPS estimator is known to have high variances (Metelli et al., 2018) considering large behavior discrepancies (πθ(v|s(i) t )) between the behavior policy µϕ and the target policy πθ. In addition, by separately weighting the entity and non-entity tokens with µϕ and π0 respectively, we avoid the increasing variance accumulated from the long chain-of-thought reasoning process and maintain the LLM’s behaviors on non-entity tokens without model degeneration. To further formalize our approach and illustrate the variance inherent in the KG-IPS estimator, we present the following Lemma, which provides a lower bound on the variance, ', 'original_lines': 'The standard IPS estimator is known to have high variances considering large behavior discrepancies (πθ(v|s(i) t )) between the behavior policy µϕ and the target policy πθ. In addition, by separately weighting the entity and non-entity tokens with µϕ and π0 respectively, we avoid the increasing variance accumulated from the long chain-of-thought reasoning process and maintain the LLM’s behaviors on non-entity tokens without model degeneration. To further formalize our approach and illustrate the variance inherent in the KG-IPS estimator, we present the following Lemma, which provides a lower bound on the variance, ', 'after_paragraph_idx': 19, 'before_paragraph_idx': 19}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Chu-Cheng Lin, Aaron Jaech, Xin Li, Matthew R Gormley, and Jason Eisner. Limitations of autore- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, et al. Personal llm agents: Insights and survey about the capability, efficiency and security. arXiv preprint arXiv:2401.05459, 2024c. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 11:02:03
ICLR.cc/2025/Conference
kkGeznktxt
9KAfYGMTM8
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '∗These authors contributed equally to this work. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': '(Joshi et al., 2024). In addition, complicated LLM agentic frameworks, involving multi-agent col- laboration, orchestration, and cooperation, rely heavily on efficient (Roucher et al., 2025; Wu et al., 2023a), robust (Masterman et al., 2024; Nguyen et al., 2024a), and proactive (Yao et al., 2023; Xia ', 'modified_lines': 'et al., 2025; Ma et al., 2023) chain-of-thought reasoning abilities, which need to be finetuned offline ', 'original_lines': 'et al., 2025; Ma et al., 2023) chain-of-thought reasoning abilities, which needs to be finetuned offline ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}]
2025-03-02 11:23:49
ICLR.cc/2025/Conference
9KAfYGMTM8
zijqrXZSCN
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'To enable controllable chain-of-thought alignment in LLMs, we principally track LLMs’ decision- making process in generating chain-of-thought reasoning steps, by formulating the process as a ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'However, considering annotators may not have comprehensive knowledge in various types of knowl- edge backgrounds, human feedback on chain-of-thought reasoning (Joshi et al., 2024) can be more challenging to collect. In addition, since chain-of-thought reasoning involves a sequential decision- ', 'modified_lines': 'making process, the volume of collected human feedback may increase exponentially. Due to such challenges, conventional reinforcement learning from human feedback (RLHF) methods (Ouyang et al., 2022; Bai et al., 2022a) can suffer from training inefficiencies and scalability issues. Motivated by recent works in using knowledge graphs (KGs) as side information for prompt engi- neering (Wang et al., 2024c; Xia et al., 2024b), self-correction (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wu et al., 2024b), evaluating chain-of-thought (Nguyen et al., 2024b), and model fine-tuning (Wang et al., 2024b; Tang et al., 2024), we propose leveraging KGs as weak yet control- lable knowledge reasoners to effectively measure the alignment between LLMs’ multi-step chain- of-thought reasoning and multi-hop KG trajectories by inverse propensity scores (IPS) (Joachims et al., 2017). Unlike the chain-of-thought evaluation method (Nguyen et al., 2024b), which depends on accurate chain-of-thought grounding in specific KGs, we propose to verbalize KG trajectories and develop a KG policy as a verbal reasoning mechanism over the graphs. This approach bridges the gap between KG and LLM reasoning and generalizes the KG policy to various LLMs. ', 'original_lines': 'making process, the volume of collected human feedback can be exponentially increased. Due to such challenges, conventional reinforcement learning from human feedback (RLHF) methods (Ouyang et al., 2022; Bai et al., 2022a) can suffer from training inefficiencies and scalability issues. Motivated by recent works in using knowledge graphs (KGs) as side information to enable prompt engineering (Wang et al., 2024c; Xia et al., 2024b), self-correction (Zhao et al., 2023; Wang et al., 2023; Li et al., 2024b; Wu et al., 2024b), evaluating chain-of-thought (Nguyen et al., 2024b), and model fine-tuning (Wang et al., 2024b; Tang et al., 2024), we propose leveraging KGs as weak yet controllable knowledge reasoners to effectively measure the alignment between LLMs’ multi- step chain-of-thought reasoning and multi-hop KG trajectories by inverse propensity scores (IPS) (Joachims et al., 2017). In contrast to the existing chain-of-thought evaluation (Nguyen et al., 2024b) method, which relies on accurate chain-of-thought grounding on specific KG, we propose verbal- izing the KG trajectories and developing a KG policy that serves as a verbal reasoning mechanism over the graphs. Therefore, we can bridge the heterogeneity between KG and LLM reasoning forms, and the verbalized KG policy can be generalized to be compatible with various LLMs. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'confidence intervals using sub-Gaussian concentration inequalities. 
To enable direct optimization of LLM policies, we leverage the proposed KG-IPS policy evaluation approach for LLM fine-tuning by the optimized LLM policy on three types of chain-of-thought reasoning tasks, and demonstrate the • We propose an offline evaluation framework, OCEAN, which bridges the heterogeneity be- ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'propose offline chain-of-thought evaluation and alignment, OCEAN, which evaluates the generated chain of thoughts from off-policy LLMs through collected offline data samples with feedback from a KG. The improved Knowledge Graph - Inverse Propensity Scores (KG-IPS) approach considers ', 'modified_lines': 'the effects of feedback from the KG policy that aligns the model’s chain-of-thought generation and the behavior policy, which prevents model degeneration. We prove that the KG-IPS estimator provides an unbiased estimate of the target policy, with a lower bound for the variance, and establish directly maximizing estimated policy values through gradient descent. Then we empirically evaluate effectiveness of the proposed policy optimization method, without affecting LLMs’ generalizability or generation quality. We summarize our contributions as follows: ', 'original_lines': 'the effects of feedback from the KG policy that aligns the model’s chain-of-thought generation and the behavior policy, which prevents model degeneration. We prove that the KG-IPS estimator pro- vides an unbiased estimate of the target policy, with a lower bound for the variance, and establish directly maximize estimated policy values through gradient descent. Then we empirically evaluate effectiveness of the proposed policy optimization method. We also observe relative performance im- provements across evaluation tasks, without affecting LLMs’ generalizability or generation quality. We summarize our contributions as follows: ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Published as a conference paper at ICLR 2025 LLM Alignment Reinforcement Learning from Human Feedback (RLHF) has been the dominant approach, optimizing LLMs using human-annotated data to align model behavior with user prefer- ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'Offline Policy Evaluation Offline policy evaluation (OPE) is essential when online policy learning is risky and impractical (Levine et al., 2020). OPE has been applied to various practical applica- tions, including evaluating the recommender system’s behavior with offline collected user feedback (Gilotte et al., 2018; Jeunen, 2019). Recent work (Gao et al., 2024) also develops an OPE estima- tor for LLM evaluation based on human feedback. Different from previous works, we study and formulate chain-of-thought generation in LLM as an MDP and use knowledge graph reasoning as automatic feedback to develop a KG-IPS policy value estimator. ', 'original_lines': 'Offline Policy Evaluation Offline policy evaluation (OPE) is essential when online deploying learned policies is risky and impractical (Levine et al., 2020). OPE has been applied to various prac- tical applications, including evaluating the recommender system’s behavior with offline collected user feedback (Gilotte et al., 2018; Jeunen, 2019). Recent work (Gao et al., 2024) also develops an OPE estimator for LLM evaluation based on human feedback. 
Different from previous works, we study and formulate chain-of-thought generation in LLM as an MDP and use knowledge graph reasoning as automatic feedback to develop a KG-IPS policy value estimator. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'faithful chain-of-thought reasoning paths that contain factually incorrect rationales (Turpin et al., 2023; Lanham et al., 2023). To address this, a number of works leverage LLMs’ self-evaluation abilities to verify and refine each reasoning step (Ling et al., 2023; Madaan et al., 2023). As the ', 'paragraph_idx': 9, 'before_section': '2 RELATED WORK', 'context_before': 'paths as an MDP and using KGs to ensure both factual accuracy and human-like reasoning. Chain-of-thought Reasoning Chain-of-thought prompting has been widely applied to elicit the ', 'modified_lines': 'strong reasoning abilities of LLMs (Wei et al., 2022; Chu et al., 2023; Xia et al., 2024a). By decom- posing a complex problem into a sequence of intermediate sub-tasks, LLMs can focus on important details and solve the problem step by step (Huang & Chang, 2023; Yu et al., 2023). Despite the remarkable performance improvements, recent studies have found that LLMs often generate un- ', 'original_lines': 'strong reasoning abilities of LLMs (Wei et al., 2022; Chu et al., 2023; Xia et al., 2024a). By de- composing a complex problem into a sequence of intermediate sub-tasks, LLMs are able to focus on important details and solve the step-by-step (Huang & Chang, 2023; Yu et al., 2023). Despite the remarkable performance improvements, recent studies have found that LLMs often generate un- ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 8}, {'section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'after_section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'context_after': 'y ∼ πθ(·|q) = πθ(y|q, c) T (cid:89) where each reasoning step ct comprises a sequence of tokens and the number of reasoning step T is determined by the model’s generation. Controllable chain-of-thought generation can be challenging ', 'paragraph_idx': 11, 'before_section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'context_before': 'includes the generation of a trajectory of reasoning steps c = (c1, c2, . . . , cT ), before the final answer prediction y, ', 'modified_lines': 'ct ∼ πθ(· | q, c<t) c<t = (c1, . . . , ct−1), t=1 πθ(ct|q, c<t), ', 'original_lines': 'ct ∼ πθ(·|q) = t−1 (cid:89) i=1 πθ(ci|q, c<i), i=1 πθ(ci|q, c<i), ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'after_section': None, 'context_after': '3 Published as a conference paper at ICLR 2025 3.2 VERBALIZED KNOWLEDGE GRAPH REASONING ', 'paragraph_idx': 12, 'before_section': '3.1 PROBLEM FORMULATION: CHAIN-OF-THOUGHT AS AN MDP', 'context_before': 'starting with the instruction prompt q, the LLM sequentially decides and generates the next-step reasoning path ct that navigates until it arrives at a target final answer y. Given the LLM policy πθ, at time step t, each state st ∈ S comprises of the instruction prompt q and previously generated ', 'modified_lines': 'i=0. The action space {1, . . . , |V|}Nt in LLMs is a sequence of Nt tokens as reasoning paths (ci)t−1 a knowledge graph entity or relation identified on a single thought, sampled from an identical and finite vocabulary set V. 
The LLM policy πθ samples next-step thought based on current state as at ∼ πθ(· | st), which is a sub-sequence in the reasoning path at ⊆ ct identified on the knowledge graph. The surrounding context ct \\ at other than the knowledge graph entity or relation is deterministically generated by LLMs. The transition in chain-of-thought is concatenating each reasoning path to the current state as st+1 = [st, ct]. Then the reward function is to evaluate each thought given the state as rt = r(st, ct). Although such formulation of chain-of-thought enables direct LLM on-policy optimization via reinforcement learning, direct interaction with knowledge graphs to collect per- step reward in LLMs can be practically challenging and require a large effort of engineering due to the discrepancy between the unstructured generation of LLMs and structured knowledge graphs (Pan et al., 2024). Therefore, we propose to offline evaluate and optimize the target policy aligning with knowledge graph preference. ', 'original_lines': 'i=0. The action space {1, . . . , |V|}Nt in LLMs is a sequence of Nt tokens reasoning paths (ci)t−1 sampled from an identical and finite vocabulary set V. The LLM policy πθ samples next-step thought based on current state as at ∼ πθ(at|st), which is a sub-sequence in the reasoning path at ∈ ct and the surrounding context ct\\at is deterministically generated by LLMs. The transition in chain-of- thought is concatenating each reasoning path to the current state as st+1 = [st, ct]. Then the reward function is to evaluate each thought given the state as rt = r(st, ct). Although such formulation of chain-of-thought enables direct LLM on-policy optimization via reinforcement learning, direct interaction with knowledge graphs to collect per-step reward in LLMs can be practically challenging and require a large effort of engineering due to the discrepancy between the unstructured generation of LLMs and structured knowledge graphs (Pan et al., 2024). Therefore, we propose to offline evaluate and optimize the target policy aligning with knowledge graph preference. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': '3.2 VERBALIZED KNOWLEDGE GRAPH REASONING', 'after_section': None, 'context_after': '(1) (rt, et) ∼ µ ((rt, et)|e0, r1, e1, . . . , rt−1, et−1) , (2) where the goal of such knowledge graph exploration is to arrive at the correct answer entity at the end of the search step T . By knowledge graph exploration, we can collect a set of trajectories H = {hk}K ', 'paragraph_idx': 13, 'before_section': '3.2 VERBALIZED KNOWLEDGE GRAPH REASONING', 'context_before': 'graph G = (E, V) consisting of the outgoing edges of current entity et−1, (rt, et) ∈ {(r′, e′)|(et−1, r′, e′) ∈ G} , ', 'modified_lines': 'where the transition feasibility of the entity et−1 to all the outgoing edges is entirely determined by G. Knowledge graph reasoning starts with a triplet (e0, r1, e1) and produces a chain of triplets h = (e0, r1, e1, . . . , rT , eT ) by sampling from a policy µ, ', 'original_lines': 'where the transition feasibility of the entity et−1 to all the outgoing edges is entirely determined by G. Knowledge graph reasoning starts with a decomposed triple (e0, r1, e1) of the instruction q, and produces a chain of triplets h = (e0, r1, e1, . . . 
, rT , eT ) by sampling from a policy µ, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_after': 'behavior discrepancies (πθ(v|s(i) t )) between the behavior policy µϕ and the target policy πθ. In addition, by separately weighting the entity and non-entity tokens with µϕ and π0 respectively, ', 'paragraph_idx': 21, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': 't )/µϕ(v|s(i) ', 'modified_lines': 'The standard IPS estimator is known to have a high variance (Metelli et al., 2018) considering large ', 'original_lines': 'The standard IPS estimator is known to have high variances (Metelli et al., 2018) considering large ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 20}, {'section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'after_section': None, 'context_after': '(cid:12) (cid:12) (cid:12) (cid:16) . ', 'paragraph_idx': 21, 'before_section': '4.1 OFFLINE EVALUATION AND OPTIMIZATION', 'context_before': 'formalize our approach and illustrate the variance inherent in the KG-IPS estimator, we present the following Lemma, which provides a lower bound on the variance, Lemma 2. The variance of the KG-IPS estimator is lower bounded by Ω( M 2 ', 'modified_lines': 'n ), where M denotes the maximum value of the weighted terms, and n is the number of samples. For a target policy πθ, let the true value function be defined as V (θ) := E , where rt ∈ [0, 1] is the reward associated with selecting entity e in state st and µ0 is the behavior policy under which the data is collected. Applying the concentration inequality for sub-Gaussian variables, the KG-IPS estimator satisfies the following confidence interval with probability at least 1 − δ: (cid:12) (cid:17) ˆVKG-IPS(θ) − V (θ) (cid:12) (cid:12) ≤ O M (cid:112)log(1/δ)/n (cid:104) πθ(e|st) µϕ(e|st) rt (cid:105) ', 'original_lines': 'n ), where M denotes the maximum value of the weighted terms, and n is the number of samples. Given V (θ) is the true value function of the target policy πθ, applying the concentration inequality for sub-Gaussian variables, the KG-IPS estimator satisfies the following confidence interval with probability at least 1 − δ: (cid:12) ˆVKG-IPS(θ) − V (θ) (cid:12) (cid:12) ≤ O M (cid:112)log(1/δ)/n (cid:17) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 21}, {'section': '4.2 KNOWLEDGE GRAPH PREFERENCE MODELING', 'after_section': None, 'context_after': '5 EXPERIMENTS ', 'paragraph_idx': 23, 'before_section': '4.2 KNOWLEDGE GRAPH PREFERENCE MODELING', 'context_before': 't=0 ', 'modified_lines': 'R(hk|ck) log µϕ(yk,t|qk, yk,<t), where J(ϕ) denotes the overall objective function representing the expected cumulative reward of the policy. Based on the distribution of relations (Figure 4b) and entities (Figure 4c) in the sampled knowledge graph trajectories, we observe that the relation distribution is relatively more skewed toward the most frequent relations. This suggests that the verbalized knowledge graph reasoning policy is likely to focus on more frequent reasoning transitions, potentially enhancing its ability to learn meaningful patterns. In contrast, the entity distribution shows a relatively short tail, which may help mitigate the risk of overfitting to specific entities or knowledge biases. ', 'original_lines': 'R(hk|ck) log µϕ(yk,t|qk, yk,<t). 
Based on the distribution of relations (Figure 4b) and entities (Figure 4c) in the sampled knowledge graph trajectories, we observe that the relation distribution is relatively more skewed toward the most frequent relations. This suggests that the verbalized knowledge graph reasoning policy is likely to focus on more frequent reasoning transitions, potentially enhancing its ability to learn meaningful patterns. In contrast, the entity distribution shows a relatively short tail, which may help mitigate the risk of overfitting to specific entities or knowledge biases. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 23}]
2025-03-03 10:06:25
ICLR.cc/2025/Conference
CLAEPIK5Al
GGq1hk5iFR
[]
2024-11-28 12:02:35
ICLR.cc/2025/Conference
GGq1hk5iFR
mFjlSmUjkY
[]
2024-11-28 12:06:19
ICLR.cc/2025/Conference
mFjlSmUjkY
4kwOqevXjz
[]
2024-11-28 12:09:43
ICLR.cc/2025/Conference
H14EhqvZXR
60plh7Zt3L
[{'section': '3.1 DATASET COLLECTION', 'after_section': None, 'context_after': 'Qwen2.5 72B Qwen2.5-Math 72B ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'GPT-4o-mini Gemini 1.5 Flash ', 'modified_lines': 'LLaMA-3.1-70B ', 'original_lines': 'LLaMA-3.1 70B ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '48.7 78.7 76.6 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '77.4 80.7 ', 'modified_lines': '', 'original_lines': '52.3 69.7 62.8 74.3 76.3 61.0 75.6 74.2 62.2 78.1 80.9 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.3 META-EVALUATION (µ-MATH) RESULTS', 'after_section': None, 'context_after': 'We find that using manual CoT instructions instead of Automatic CoT improves or maintains judgment performance, for all but the LLaMA models, as shown in Table 5. LLaMA’s performance drop stems from higher inconclusive judgment rates with CoT (refer to Appendix G). At the same time, Gemini models benefit the most from this transition, gaining over 10% in F1-score and becoming the top-ranked models, surpassing Qwen and GPT models that outperform Gemini with Automatic CoT. It is also evident that being a better solver does not necessarily lead to being a better judge, see additional discussion in Appendix H. Also, the best attainable overall F1-score is only 80.7%, which ', 'paragraph_idx': 64, 'before_section': '4.3 META-EVALUATION (µ-MATH) RESULTS', 'context_before': 'Predictive Value (NPV), with F1 as the primary one are presented. Columns under µ-MATH represent the integral score over the entire benchmark, while µ-MATH <model> is a subset with solutions generated by a specific model. U-MATHText accuracy is added for comparison of model’s performance as a math solver vs as a math ', 'modified_lines': 'judge. Bold indicates the best result within a column. Reference full table in Appendix J. Appendix F provides this data and a visual comparison. ', 'original_lines': 'judge. Bold indicates the best result for each column. Reference full table in Appendix J. Appendix F provides this data and a visual comparisons. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 64}, {'section': '5 CONCLUSION', 'after_section': None, 'context_after': '10 ', 'paragraph_idx': 74, 'before_section': '5 CONCLUSION', 'context_before': 'in advancing the mathematical reasoning capabilities of LLMs and encourage the development of models better equipped to tackle complex, real-world mathematical problems. ', 'modified_lines': 'ETHICS STATEMENT We collected all data in U-MATH and µ-MATH with appropriate permissions, ensuring no personal or proprietary information is included. The datasets consist solely of mathematical problems and solutions, without any sensitive content. The annotators from [ANONYMIZED] are employed in ', 'original_lines': 'ACKNOWLEDGEMENT We thank all contributors from [ANONYMIZED] and [ANONYMIZED] experts who assisted in sourcing and verifying problems, inspect the solutions and provided valuable feedback throughout the development of U-MATH. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 73}, {'section': 'Abstract', 'after_section': None, 'context_after': 'the partner laboratory with [ANONYMIZED]; their annotation time is fully compensated at a fair hourly rate. We open-sourced the datasets and code under suitable licenses to support transparency and research advancement. There are no known conflicts of interest associated with this work. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': '592 593 ', 'modified_lines': '', 'original_lines': 'We would like to give special thanks to the dedicated team of [ANONYMIZED] experts who played a crucial role in validating problems and ensuring their quality: [ANONYMIZED]. ETHICS STATEMENT We collected all data in U-MATH and µ-MATH with appropriate permissions, ensuring no personal or proprietary information is included. The datasets consist solely of mathematical problems and solutions, without any sensitive content. The annotators from [ANONYMIZED] are employed in ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Petersen, and Julius Berner. 2024. Mathematical capabilities of chatgpt. Advances in neural information processing systems, 36. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '646 647 ', 'modified_lines': '', 'original_lines': 'Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783. Meng Fang, Xiangpeng Wan, Fei Lu, Fei Xing, and Kai Zou. 2024. Mathodyssey: Benchmarking mathematical problem-solving skills in large language models using odyssey math data. arXiv preprint arXiv:2406.18321. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Nexusflow. 2024. Introducing athene-v2: Advancing beyond the limits of scaling with targeted ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '699 700 701 ', 'modified_lines': '', 'original_lines': ' Yujun Mao, Yoon Kim, and Yilun Zhou. 2024. Champ: A competition-level dataset for fine-grained analyses of llms’ mathematical reasoning capabilities. arXiv preprint arXiv:2401.06961. Meta AI. 2024. Llama 3.2: Revolutionizing edge ai and vision with open, customizable models. https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/. Accessed: 2024-11-15. Mistral AI. 2024. Announsing pixtral-12b. https://mistral.ai/news/pixtral-12b/. Accessed: 2024-10-01. Mistral.ai. 2024. Mathstral. https://mistral.ai/news/mathstral/. Accessed: 2024-10-01. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Harsh Mehta, Heidi Howard, Malcolm Reynolds, Lora Aroyo, Quan Wang, Lorenzo Blanco, Albin Cassirer, Jordan Griffith, Dipanjan Das, Stephan Lee, Jakub Sygnowski, Zach Fisher, James Besley, Richard Powell, Zafarali Ahmed, Dominik Paulus, David Reitter, Zalan Borsos, Rishabh Joshi, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Aditya Barua, Victor Ungureanu, Yuan Zhang, Bat-Orgil Batsaikhan, Mateo Wirth, James Qin, Ivo Danihelka, Tulsee Doshi, Martin Chadwick, Jilin Chen, Sanil Jain, Quoc Le, Arjun Kar, Madhu Gurumurthy, Cheng Li, Ruoxin Sang, Fangyu Liu, Lampros Lamprou, Rich Munoz, Nathan Lintz, ', 'modified_lines': '', 'original_lines': ' 13 Under review as a conference paper at ICLR 2025 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Ambrose Slone, Kedar Soparkar, Disha Shrivastava, James Cobon-Kerr, Michael Sharman, Jay Pavagadhi, Carlos Araya, Karolis Misiunas, Nimesh Ghelani, Michael Laskin, David Barker, Qiujia Li, Anton Briukhov, Neil Houlsby, Mia Glaese, Balaji Lakshminarayanan, Nathan Schucher, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Cheng, Adam Bloniarz, Jaehoon Lee, Pedram Pejman, Paul Michel, Stephen Spencer, Vladimir Feinberg, Xuehan Xiong, Nikolay Savinov, Charlotte Smith, Siamak Shakeri, Dustin Tran, Mary Chesus, Bernd Bohnet, George Tucker, Tamara von Glehn, Carrie Muir, Yiran Mao, Hideto Kazawa, ', 'modified_lines': '', 'original_lines': ' 14 Under review as a conference paper at ICLR 2025 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Bhavishya Mittal, Nilesh Tripuraneni, Yannis Assael, Thomas Brovelli, Prateek Jain, Mihajlo Velimirovic, Canfer Akbulut, Jiaqi Mu, Wolfgang Macherey, Ravin Kumar, Jun Xu, Haroon Qureshi, Gheorghe Comanici, Jeremy Wiesner, Zhitao Gong, Anton Ruddock, Matthias Bauer, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Brian McWilliams, Sankalp Singh, Annie Louis, Wen Ding, Dan Popovici, Lenin Simicich, Laura Knight, Pulkit Mehta, Nishesh Gupta, Chongyang Shi, Saaber Fatehi, Jovana Mitrovic, Alex Grills, Joseph Pagadora, Dessie Petrova, Danielle Eisenbud, Zhishuai Zhang, Damion Yates, ', 'modified_lines': '', 'original_lines': ' 15 Under review as a conference paper at ICLR 2025 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, and Yi Dong. 2024b. Helpsteer2-preference: Complementing ratings with preferences. ', 'modified_lines': '', 'original_lines': '16 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 12:19:58
ICLR.cc/2025/Conference
Smyb1DGt8a
r4tBPYaKTv
[]
2024-11-27 17:05:06
ICLR.cc/2025/Conference
r4tBPYaKTv
3noLuVyeci
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Classification tasks are capable of expressing aleatoric uncertainty by design in the form of class probabilities. However, in the case of regression, the models traditionally only output a single value, or a simple parametric distribution at best. An expectation of 10mm of rainfall could indicate relative certainty of a rainy day with actual 10mm of rainfall, but could also indicate 10% chance of a deadly 1 storm with 100mm of rain. In such cases, even a heteroscedastic distributional regression model with a simple unimodal output lacks the nuance required for informed decision making. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'problem setup, where the model lacks information to make an exact prediction. Most tasks involve key pieces of information missing, factors which influence the target, yet ones we cannot provide to the model as inputs. Either due to inherent randomness or unobservable latent variables, in most ', 'modified_lines': 'real-world scenarios it is theoretically impossible to always predict the exact target value. As an inherent constraint, aleatoric uncertainty cannot be reduced with additional data of the same kind. ∗Emails: {kdomokos, jungadam, benczur}@info.ilab.sztaki.hu, [email protected] †HUN-REN SZTAKI 1https://github.com/proto-n/torch-naut ‡Sz´echenyi University, Gy˝or, Hungary §Ericsson Hungary Published as a conference paper at ICLR 2025 ', 'original_lines': 'real-world scenarios it is theoretically impossible to always predict the exact target value. However, as a constraint of the problem setup, aleatoric uncertainty cannot be reduced with additional data of the same kind and must be accounted for, even when data scarcity is not an issue. 1https://anonymous.4open.science/r/crps_iclr-B270 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Our experiments show that our models can represent a wide range of complex distributions, using a relatively small nondeterministic part considering modern deep learning hardware and model sizes. ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'BNN-s (Gawlikowski et al., 2023) or Mixture Density Networks (MDN) (Bishop, 1994). Scaling. Our approach scales well to larger network sizes. While part of the model is evaluated ', 'modified_lines': 'multiple times, the incurred cost scales with target rather than input complexity, see Appendix C.8. ', 'original_lines': 'multiple times, the added cost scales with target rather than input complexity, see Appendix C.8. ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 8}, {'section': '2.2 BAYESIAN NEURAL NETWORKS', 'after_section': None, 'context_after': 'ES(F, y) = ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '4.5 MULTIVARIATE REGRESSION The formulas described so far only apply to univariate targets. 
Fortunately, a multivariate general- ', 'modified_lines': 'ization of the CRPS exists, called the energy score (Gneiting & Raftery, 2007; Jordan et al., 2019): ', 'original_lines': 'ization of the CRPS exists, called the energy score (Gneiting et al., 2008; Jordan et al., 2019): ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '5.2 MULTIVARIATE REGRESSION ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'and layered-multi-head model both learn fast and manage to learn the baseline distribution in high- detail. However, without layering, visible artifacts remain even after 2000 epochs. Ultimately, the layered-multi-head variant learns fast and ends up representing the distribution with high-accuracy ', 'modified_lines': 'at the same time. See Appendix C.3 for experimental setup and more samples during training. ', 'original_lines': 'at the same time. See Appendix C.3 for samples during training and experimental setup. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': '5 EXPERIMENTS', 'context_after': 'Appendix C.7, revealing that the method is also capable of achieving the best NLL result on the concrete dataset, with a score of 2.73 (MH variant with ensembling). Multi-head variants under- perform WCRPSe on many (though not all) datasets, indicating that the single-head architecture’s ', 'paragraph_idx': 46, 'before_section': None, 'context_before': 'Gaussian methods with and without techniques for epistemic uncertainty. Best scores are highlighted in bold, however one should also note the error ranges included with scores when reading the results. ', 'modified_lines': 'Overall, it is clear that relaxing the Gaussian assumption generally helps achieve higher NLL scores, except on concrete with Dropout winning, and naval where a simple BNN dominates (−6.84, see Appendix C.11). Strongest scores are mostly shared between MDNbnn and WCRPSe, with the latter scoring best overall. We present scores for variants of CRPS-based methods in an ablation study in ', 'original_lines': 'Overall, it is clear that relaxing the Gaussian assumption generally helps methods achieve higher scores. In NLL, only Dropout manages to score a victory out of the classic baselines. The rest of the strongest scores are shared between MDNbnn and WCRPSe, with the latter ultimately scoring best overall. We present evaluations for variants of CRPS-based methods in an ablation study in ', 'after_paragraph_idx': 46, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Ethan Goan and Clinton Fookes. Bayesian neural networks: An introduction and survey. Case Studies in Applied Bayesian Data Science: CIRM Jean-Morlet Chair, Fall 2018, pp. 45–87, 2020. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Tilmann Gneiting and Roopesh Ranjan. Comparing density forecasts using threshold-and quantile- weighted scoring rules. Journal of Business & Economic Statistics, 29(3):411–422, 2011. ', 'modified_lines': '', 'original_lines': ' Tilmann Gneiting, Larissa I Stanberry, Eric P Grimit, Leonhard Held, and Nicholas A Johnson. Assessing probabilistic forecasts of multivariate quantities, with an application to ensemble pre- dictions of surface winds. Test, 17:211–235, 2008. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 BAYESIAN NEURAL NETWORKS', 'after_section': None, 'context_after': 'essentially 0 numerically, causing the NLL metric to take on the value of ∞. 31 Table 15: RMSE results on the UCI Datasets benchmark (lower is better). Scores that perform at least on par when compared to the results in Table 2 are highlighted in bold. ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'method on reinforcement learning tasks and lacks detailed quantitative analysis. Therefore, we choose not to benchmark against it using its original datasets. Fortunately, a recent uncertainty quantification framework (Lehmann et al., 2024) implements BNN-LV for regression. Given that ', 'modified_lines': 'the first author of Depeweg et al. (2018) is also a co-author of Lehmann et al. (2024) and was 6When a ground truth point falls too far outside the predicted distribution, the likelihood function becomes Published as a conference paper at ICLR 2025 ', 'original_lines': 'the first author of Depeweg et al. (2018) is also a co-author of (Lehmann et al., 2024) and was 7When a ground truth point falls too far outside the predicted distribution, the likelihood function becomes 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-28 10:07:24
ICLR.cc/2025/Conference
OXFy4RGCxo
4GuqiLnRom
[{'section': '6 EXPERIMENT', 'after_section': None, 'context_after': 'Tasks f (x∗ ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '430 431 ', 'modified_lines': 'Table 2: Overall results in GTOPX unconstrianed scenario. Results are averaged over five times, and “±” indicates the standard deviation. f (x∗ OFF) means the optimal objective function value in the offline dataset. FS (i.e., final score) means the function value that an offline optimization algorithm finds in the final step during optimization process. FS measures optimality while SI measures stability. ', 'original_lines': 'Table 2: Overall results in unconstrianed scenario. Results are averaged over five times, and “±” indicates the standard deviation. f (x∗ OFF) means the optimal objective function value in the offline dataset. FS (i.e., final score) means the function value that an offline optimization algorithm finds in the final step during optimization process. FS measures optimality while SI measures stability. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 EXPERIMENT', 'after_section': None, 'context_after': '196.21 151.68 216.34 112.11 ', 'paragraph_idx': 33, 'before_section': '6 EXPERIMENT', 'context_before': 'Metrics ', 'modified_lines': 'GTOPX 2 GTOPX 3 GTOPX 4 GTOPX 6 ', 'original_lines': 'Gtopx 2 Gtopx 3 Gtopx 4 Gtopx 6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': '6 EXPERIMENT', 'after_section': None, 'context_after': 'Tasks f (x∗ ', 'paragraph_idx': 36, 'before_section': None, 'context_before': '−∞ −∞ ', 'modified_lines': 'Table 3: Overall results in GTOPX constrianed scenario. Details are the same as Table 2. The symbol “-” means that the algorithm cannot work because of too few solutions that satisfly the constraints. ', 'original_lines': 'Table 3: Overall results in constrianed scenario. Details are the same as Table 2. The symbol “-” means that the algorithm cannot work because of too few solutions that satisfly the constraints. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-18 11:49:22
ICLR.cc/2025/Conference
4GuqiLnRom
tB80c69TvF
[{'section': '4.1 CUSTOMIZED NARROW DISTRIBUTIONS OF OFFLINE DATASETS', 'after_section': None, 'context_after': 'The experimental results in Table 13 show that the FS values of all algorithms are close to the optimal solutions of the offline dataset, indicating that under this experimental setting, it is challenging for the algorithms to further improve upon the current solutions. This phenomenon can be attributed to ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'mally affected by data volume and are able to find better solutions, demonstrating their exceptional capabilities. ', 'modified_lines': 'First, we provide a black-box ground-truth oracle objective function for each task. An initial dataset is obtained by uniformly sampling and evaluating the objective function. Datasets of different difficulty levels are constructed based on this sorted initial dataset. Specifically, the right-n% of the solution space range and the left-m% of the solution space range are removed to show different solution space distributions. Through the above steps, an offline dataset with a narrow distribution in real tasks is constructed by removing solutions. To simulate a more realistic data distribution, we choose a dataset size that is 1000 times the variable dimension. At the same time, to further simulate the narrow distribution, missing the m% and the n% are used to construct different solution distributions. In this paper, we select the middle 50% of the data (i.e., m% − n% = 50%) to construct a simulated dataset as a reasonable baseline. Since the proposed benchmark is highly flexible and customizable, it enables users to modify the data volume, m%, and n% as needed. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'E HYPERPARAMETER ANALYSIS We outline guidelines for hyperparameter selection for methods evaluated in the benchmark. These ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '−∞ −∞ ', 'modified_lines': '', 'original_lines': '29 Under review as a conference paper at ICLR 2025 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 08:32:16
ICLR.cc/2025/Conference
HY5h2VtGlV
71PZVV3I5d
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'A wide array of studies are dedicated to alleviating efficiency issues, among which context compres- sion is a promising direction (Mu et al., 2023; Chevalier et al., 2023; Ge et al., 2024; Jiang et al., ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'such as long-document understanding (Jiang et al., 2024b), long-content creation (Bai et al., 2024), and long-term memorization/reasoning (Zhang et al., 2024). To address these needs, modern LLMs are built with extended context windows (e.g., 128K) that enable remarkable long-context processing ', 'modified_lines': 'capabilities (OpenAI, 2024; Yang et al., 2024; et al., 2024). Despite their effectiveness, LLMs encounter efficiency challenges in processing long contexts. On one hand, transformer-based LLMs incur substantial computational costs due to the quadratic complexity of self attention. On the other hand, they require tremendous GPU memory to hold the KV cache of the entire sequence for faster decoding. Both computation and memory costs increase as the context length grows. ', 'original_lines': 'capabilities (OpenAI et al., 2024; Yang et al., 2024; Dubey et al., 2024). Despite their effectiveness, LLMs encounter efficiency challenges in processing long contexts. On one hand, transformer-based LLMs incur substantial computational costs due to the quadratic complexity of self attention. On the other hand, they require tremendous GPU memory to hold the KV cache of the entire sequence for faster decoding. Both computation and memory costs increase as the context length grows. ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'long contexts. Besides, they try to compress the context “all-at-once”, lacking a fine-grained han- dling of the detailed information. Moreover, these soft tokens must be re-encoded before generation, resulting in inferior efficiency in both training and inference. Lastly, these methods are learned to compress with a fixed number of soft tokens, thus, it’s hard to customize the compression ratio for downstream tasks. While some alternamtive methods focus on deleting unimportant tokens (Jiang et al., 2023b; Li et al., 2024b), they depend on the input question to estimate the token importance, limiting their efficiency in real-world multi-turn scenarios. ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'Despite the current progresses, it it remains a tough challenge to compress long contexts. Specifi- cally, existing methods usually summarize the context into a few soft tokens (Chevalier et al., 2023; Ge et al., 2024), which constitute the major bottleneck to summarize the complex information within ', 'modified_lines': ' ∗Peitian Zhang and Zheng Liu are the co-first authors †Zheng Liu is the corresponding author 1 Published as a conference paper at ICLR 2025 Figure 1: Overview of Activation Beacon. The context is partitioned into chunks. Each chunk is further split into fine-grained units and interleaved with beacon tokens according to a compression ratio (2 in the figure). The LLM encodes one chunk at a time, compressing the context into beacon tokens’ activations, which are accumulated and reused for encoding following chunks. 
', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Figure 1: Overview of Activation Beacon. The context is partitioned into chunks. Each chunk is further split into fine-grained units and interleaved with beacon tokens according to a compression ratio (2 in the figure). The LLM encodes one chunk at a time, compressing the context into beacon tokens’ activations, which are accumulated and reused for encoding following chunks. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '3.1 COMPRESSION MECHANISM', 'after_section': '3.1 COMPRESSION MECHANISM', 'context_after': 'Next, for each chunk Xi, we determine a compression ratio αi (w is evenly divisible by αi). The chunk is further split into fine-grained units of size α. Then a group of ki = w/αi beacon to- kens, Bi = [⟨b⟩i ', 'paragraph_idx': 18, 'before_section': '3.1 COMPRESSION MECHANISM', 'context_before': '[x1, . . . , xn] Partition −−−−−−→ [X1, . . . X⌈n/w⌉], Xi = [x(i−1)w+1, . . . , xiw]1 = [xi ', 'modified_lines': ' 1, . . . , xi w]. (1) ', 'original_lines': '', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Xi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1, . . . , ⟨b⟩i ki ', 'modified_lines': '', 'original_lines': ' 1, . . . , xi w]. (1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 COMPRESSION MECHANISM', 'after_section': '3.1 COMPRESSION MECHANISM', 'context_after': '1, . . . , xi (cid:124) αi , ⟨b⟩i ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '⟨b⟩i (cid:124) ', 'modified_lines': 'xi ', 'original_lines': 'xi <i ', 'after_paragraph_idx': 18, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(9) where F Att is the computation during self attention, and F Oth is the computation of other modules. For full-attention models, s = n, spst = 0. For “beaconed” models, the FLOPs is: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the input context length, spst denote the cached context length, the forward FLOPs is: FLOPs = F Att(s, spst) + F Oth(s), ', 'modified_lines': '', 'original_lines': ' 5 FFNLayer NormSelf-AttnSelf−Attn𝑏Layer Norm𝑥11𝑥21𝑥31𝑥41⟨b⟩11𝑥12𝑥22𝑥32𝑥42𝑥11⟨b⟩21⟨b⟩12⟨b⟩22⟨b⟩11⟨b⟩21𝑥21𝑥31𝑥41𝑥12⟨b⟩12⟨b⟩22𝑥22𝑥32𝑥42𝑋1′𝑋2′Forward①Forward② Under review as a conference paper at ICLR 2025 Figure 3: Comparison of the forward FLOPs of different models using full attention and Activation Beacon (the compression ratio is annotated in the brackets). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 LEARNING METHOD', 'after_section': None, 'context_after': '4.1 SETTINGS ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '60.3 66.4 ', 'modified_lines': '{2, 4, 8, 16, 32} during training. At inference, one can choose one compression ratio according to the specific efficiency requirement in downstream tasks and stick to it for all chunks. 
4 EXPERIMENTS Our experiment mainly study Activation Beacon’s effectiveness (§4.2), efficiency (§4.3), and flex- ibility (§4.4) in long context compression. Besides, we explore Activation Beacon’s impact on short-context capabilities of the backbone LLM (§4.5) and the effect of each technical design (§4.6). ', 'original_lines': 'Figure 4: Evaluation on Needle-in-a-Haystack. Activation Beacon can accurately retrieves the nee- dle most of the time, despite the context is far longer than its training data. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 COMPRESSION EFFECTIVENESS', 'after_section': '4.2 COMPRESSION EFFECTIVENESS', 'context_after': '2We use Llama-2 because AutoCompressor and ICAE are based on it, both of which are important baselines. 7 409672811046713653168392002423210263962958232768Context Length01122334455667788100Depth Percent5.01.0(A) Llama-2-7B with Activation Beacon4096182043231246421605297463888746102855116963131072Context Length01122334455667788100Depth Percent5.09.07.0(B) Qwen-2-7B with Activation Beacon12345678910Accuracy Table 2: Evaluation on Multi-Needle-in-a-Haystack where the questions are issued one-by-one in a multi-turn conversation setting. All compression methods use a x8 compression ratio. Activation ', 'paragraph_idx': 42, 'before_section': '4.2 COMPRESSION EFFECTIVENESS', 'context_before': 'To verify the compression effectiveness of Activation Beacon, we evaluate it on LongBench (Bai et al., 2023), which consists of a variety of long-context tasks with 32K maximum length, including ', 'modified_lines': 'question answering, summarization, few-shot learning, and code completion. Since Llama-2 has a context window of 4K, we truncate the context longer than 4K from middle before inputting to it. For compression methods implemented on Llama-2, we set adaptive compression ratio, translating to x2 compression for 4K-8K contexts, x4 compression for 8K-16K contexts, and x8 compression for 16K-32K contexts. For methods implemented on Qwen-2, we apply a uniform compression ratio of x4. The results are reported in Table 1. We highligh two observations in the following. Published as a conference paper at ICLR 2025 Figure 4: Evaluation on Needle-in-a-Haystack. Activation Beacon can accurately retrieves the nee- dle most of the time, despite the context is far longer than its training data. Firstly, Activation Beacon achieves superior compression quality over other compression base- lines across all tasks. Concretely, it siginificantly outperforms ICAE and AutoCompressor, which verifies that several soft tokens are not enough to encapsulate the rich information within long con- texts. LongLLMLingua also lags far behind Activation Beacon because it need to delete too many tokens given a high compression ratio (e.g., x4, x8), which may destroy the coherence of the con- text and lose important information. Despite SnapKV’s top performance among baselines, it cannot compress context longer than the backbone LLM’s window. This is because it estimates the token importance based on self attention, which becomes inaccurate once the context exceeds the window size, limiting its practical usage when compressing long contexts. Secondly, Activation Beacon achieves comparable performance to the fine-tuned uncom- pressed baseline (Full-FT) even though Full-FT takes in the entire context without compression. 
This indicates that Activation Beacon is able to compress long contexts without evident informa- tion loss, which validates its high compression quality yielded from the progressive compression workflow. Furthermore, Activation Beacon improves upon Llama-2 by a large margin despite their context window is the same, i.e. 4K. The gain is because Llama-2 (Full) directly uses the truncated 4K context, while Activation Beacon compresses the 32K context into 4K compact activations. This implies that Activation Beacon can effectively introduce useful information from Llama-2’s unseen context. Therefore, it can be viewed as an efficient approach for context extension. We further evaluate Activation Beacon on Needle-in-a-Haystack (NIAH) following the official set- tings (gkamradt, 2023) to investigate whether it will lose fine-grained information. The accuracy is estimated by ChatGPT (ranges from 1 to 10). For both Llama-2 and Qwen-2, we set adaptive compression ratio as introduced above. The results are shown in Figure 4. It can be observed that Activation Beacon precisely retrieves the needle most of the time. Note that Activation Beacon conducts query-independent compression, which means it has no prior knowledge of what to com- press and what not. Hence, this remarkable performance again validates our tailored compression mechanism and learning method can preserve the fine-grained contextual information. Moreover, Activation Beacon is only trained on context shorter than 20K, while its compression capability can generalize to far longer contexts (e.g., 128K). 4.3 COMPRESSION EFFICIENCY We evaluate the efficiency of Activation Beacon based on the Multi-Needle-in-a-Haystack task fol- lowing NeedleBench (Li et al., 2024a). Specifically, we fix the context length to 32K for Llama-2 and 128K for Qwen-2, and insert 3 different needles at different positions. The task is organized in a multi-turn conversation setting, where the model is asked to retrieve one specific needle in each turn. The experiment is repeated 20 times for each model with distinct needle positions. In Table 2, we report the accuracy and the end-to-end latency of compression & generation (measured in seconds). It can be observed that Activation Beacon enjoys lower latency than other compression base- lines. Notably, it is 1.8x faster than AutoCompressor because it does not have to re-encode the soft tokens from previous chunks. It also leads to 9.3x and 3.6x acceleration upon LongLLMLingua and SnapKV given three turns, respectively. This is because both baselines are query-dependent while Activation Beacon is not, which eliminates the need to re-compute the compression results for 8 Published as a conference paper at ICLR 2025 ', 'original_lines': '324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 43, 'before_paragraph_idx': 42}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 5: Evaluation on Needle-in-a-Haystack with various compression ratios based on Llama-2. Activation Beacon achieves top compression quality across all compression configurations. different input questions. Moreover, Activation Beacon demonstrates consistent speed-up over the Full-FT baseline, achieving 2x acceleration at 128K context length. 
This matches our estimation in Figure 3(b) as Activation Beacon (x8) saves half of the computation. In the meanwhile, since ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10.659 2.981 ', 'modified_lines': '', 'original_lines': 'question answering, summarization, few-shot learning, and code completion. Since Llama-2 has a context window of 4K, we truncate the context longer than 4K from middle before inputting to it. For compression methods implemented on Llama-2, we set adaptive compression ratio, translating to x2 compression for 4K-8K contexts, x4 compression for 8K-16K contexts, and x8 compression for 16K-32K contexts. For methods implemented on Qwen-2, we apply a uniform compression ratio of x4. The results are reported in Table 1. We highligh two observations in the following. Firstly, Activation Beacon achieves superior compression quality over other compression base- lines across all tasks. Concretely, it siginificantly outperforms ICAE and AutoCompressor, which verifies that several soft tokens are not enough to encapsulate the rich information within long con- texts. LongLLMLingua also lags far behind Activation Beacon because it need to delete too many tokens given a high compression ratio (e.g., x4, x8), which may destroy the coherence of the con- text and lose important information. Despite SnapKV’s top performance among baselines, it cannot compress context longer than the backbone LLM’s window. This is because it estimates the token importance based on self attention, which becomes inaccurate once the context exceeds the window size, limiting its practical usage when compressing long contexts. Secondly, Activation Beacon achieves comparable performance to the fine-tuned uncom- pressed baseline (Full-FT) even though Full-FT takes in the entire context without compression. This indicates that Activation Beacon is able to compress long contexts without evident informa- tion loss, which validates its high compression quality yielded from the progressive compression workflow. Furthermore, Activation Beacon improves upon Llama-2 by a large margin despite their context window is the same, i.e. 4K. The gain is because Llama-2 (Full) directly uses the truncated 4K context, while Activation Beacon compresses the 32K context into 4K compact activations. This implies that Activation Beacon can effectively introduce useful information from Llama-2’s unseen context. Therefore, it can be viewed as an efficient approach for context extension. We further evaluate Activation Beacon on Needle-in-a-Haystack (NIAH) following the official set- tings (gkamradt, 2023) to investigate whether it will lose fine-grained information. The accuracy is estimated by ChatGPT (ranges from 1 to 10). For both Llama-2 and Qwen-2, we set adaptive compression ratio as introduced above. The results are shown in Figure 4. It can be observed that Activation Beacon precisely retrieves the needle most of the time. Note that Activation Beacon conducts query-independent compression, which means it has no prior knowledge of what to com- press and what not. Hence, this remarkable performance again validates our tailored compression mechanism and learning method can preserve the fine-grained contextual information. Moreover, Activation Beacon is only trained on context shorter than 20K, while its compression capability can generalize to far longer contexts (e.g., 128K). 
8 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Under review as a conference paper at ICLR 2025 4.3 COMPRESSION EFFICIENCY We evaluate the efficiency of Activation Beacon based on the Multi-Needle-in-a-Haystack task fol- lowing NeedleBench (Li et al., 2024a). Specifically, we fix the context length to 32K for Llama-2 and 128K for Qwen-2, and insert 3 different needles at different positions. The task is organized in a multi-turn conversation setting, where the model is asked to retrieve one specific needle in each turn. The experiment is repeated 20 times for each model with distinct needle positions. In Table 2, we report the accuracy and the end-to-end latency of compression & generation (measured in seconds). It can be observed that Activation Beacon enjoys lower latency than other compression base- lines. Notably, it is 1.8x faster than AutoCompressor because it does not have to re-encode the soft tokens from previous chunks. It also leads to 9.3x and 3.6x acceleration upon LongLLMLingua and SnapKV given three turns, respectively. This is because both baselines are query-dependent while Activation Beacon is not, which eliminates the need to re-compute the compression results for ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Since Activation Beacon inter- leaves beacon tokens with raw to- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'we recommend to use x8 compression ratio as it preserves most information with high efficiency. 4.5 SHORT-CONTEXT CAPABILITIES ', 'modified_lines': '', 'original_lines': ' it Model Method MMLU ARC-C BoolQ GSM8K Table 3: Activation Beacon preserves the short-context capa- bilities of the backbone LLM. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. CoRR, abs/2004.05150, 2020. URL https://arxiv.org/abs/2004.05150. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'abs/2408.07055, 2024. doi: 10.48550/ARXIV.2408.07055. URL https://doi.org/10. 48550/arXiv.2408.07055. ', 'modified_lines': '', 'original_lines': '10 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Ziyan Jiang, Xueguang Ma, and Wenhu Chen. Longrag: Enhancing retrieval-augmented generation with long-context llms. CoRR, abs/2406.15319, 2024b. doi: 10.48550/ARXIV.2406.15319. URL ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention, 2024a. URL https://arxiv.org/abs/2407.02490. 
', 'modified_lines': '', 'original_lines': ' 13 Under review as a conference paper at ICLR 2025 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, ', 'modified_lines': '', 'original_lines': ' 15 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A TRAINING DATA In the pre-training phase, we use 1B tokens from RedPajama. We add an eos token to the end of ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'quantization. 2023. doi: 10.13140/RG.2.2.28167.37282. URL https://rgdoi.net/10. 13140/RG.2.2.28167.37282. ', 'modified_lines': '', 'original_lines': '16 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ing, resulting in inferior performance on these tasks. However, this problem should be mitigated by adjusting the composition of training data. We add 200 synthetic samples (100 for VT and 100 for CWE) with 20K maximum context length to the training data and fine-tune the model. The new ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ing/aggregation tasks (VT and CWE). A similar drop can also be observed on Full-FT, too. One likely reason for this disadvantage is that our current fine-tuning recipe only uses one-hop QA data (as stated in Appedix A), which does not teach the model to perform complex reasoning or count- ', 'modified_lines': '', 'original_lines': ' 18 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 Under review as a conference paper at ICLR 2025 Table 7: Evaluation on RULER (Hsieh et al., 2024). Initially, Activation Beacon lags behind full- attention models on reasoning/aggregation tasks, yet the gap can be easily compensated by addi- tional fine-tuning with synthetic 200 samples. Model Qwen-2.5-7B Method Full Full-FT Ours Ours + Synthetic FT NIAH AVG VT CWE FWE QA AVG 79.06 80.13 78.43 80.91 88.00 71.95 25.30 85.30 41.04 32.28 10.12 59.30 66.67 64.76 60.00 72.18 40.25 52.38 52.15 51.27 Table 8: Evaluation of 7B and 14B models on LongBench (Bai et al., 2023). 
Activation Beacon always maintains a comparable performance to the expensive full-attention fine-tuned baseline. Few-Shot Code Single-Doc Multi-Doc Method Length Summ. Model Qwen-2.5-7B Qwen-2.5-14B Full Full-FT Ours Full Full-FT Ours 32K 32K 32K 32K 32K 32K 41.9 42.7 42.5 42.5 43.9 43.4 45.2 46.1 45.8 52.9 50.5 49.9 26.5 26.7 26.8 25.1 27.1 27.1 69.1 67.6 67.4 71.7 68.8 68.5 64.9 66.3 66.4 66.7 67.1 67.4 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-23 10:10:53
ICLR.cc/2025/Conference
71PZVV3I5d
jtgqRVvhdy
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Jesse Mu, Xiang Lisa Li, and Noah D. Goodman. Learning to compress prompts with gist tokens. CoRR, abs/2304.08467, 2023. doi: 10.48550/ARXIV.2304.08467. URL https://doi.org/ 10.48550/arXiv.2304.08467. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'doi: 10.48550/ARXIV.2310.07240. URL https://doi.org/10.48550/arXiv.2310. 07240. ', 'modified_lines': '', 'original_lines': '12 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'generate precisely 128 tokens. We use a uniform x8 compression ratio. The peak GPU memory during the entire generation process is also reported. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '52.15 51.27 ', 'modified_lines': '', 'original_lines': 'C ADDITIONAL EFFICIENCY ANALYSIS We further evaluate the efficiency of Activation Beacon by decomposing the latency of pre-filling and decoding. Specifically, we set the context length to 32K and 128K and enforce the model to ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-23 11:02:17
ICLR.cc/2025/Conference
mtWl7DqwdI
pLKPd0w3z4
[]
2025-02-19 01:41:16
ICLR.cc/2025/Conference
pLKPd0w3z4
fJSw80QUUp
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'largely untapped. Aided Evolutionary Search for Robot Design Automation. Leveraging a novel reflection mechanism termed DiRect, we elicit more knowledgeable exploratory behaviors from LLMs based on past search trajectories, reshaping the exploration- ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'balance exploitation with exploration, often leading to inferior solution diversity, as well as poor generalizability of problem solving across different task settings. These unsolved issues render the prowess of LLMs in robot design automation ', 'modified_lines': 'In this work, we present LASeR – Large Language Model- ', 'original_lines': 'In this work, we present LASeR – Large Language Model- ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'ities of LLMs, facilitating generalizable design processes that effectively inspire zero-shot robot proposals for new applications. Our simulated experiments on voxel-based soft robots showcase distinct advantages of LASeR over competitive ', 'modified_lines': 'baselines. Code at https://github.com/WoodySJR/LASeR. ', 'original_lines': 'baselines. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'et al., 2024b). More and more recent studies have explored the use of LLMs as “intelligent search operators”. By receiving previously found solutions through prompts, LLMs effectively draw upon their in-context learning and pattern completing abilities to iteratively propose improved candidate ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'making, and generalization capabilities (Achiam et al., 2023; Touvron et al., 2023; Team et al., 2023; Team, 2023), sparking a flurry of research interest in their application to optimization problems. Earlier efforts embarked on leveraging LLMs to aid traditional search heuristics within evolutionary ', 'modified_lines': 'algorithms (EAs), such as selecting parent solutions for mutation and crossover (Liu et al., 2024a; Ye et al., 2024) or serving as surrogate models and candidate samplers in Bayesian Optimization (Liu ', 'original_lines': 'algorithms (EAs), such as selecting parent solutions for mutation and crossover (Liu et al., 2024a; Ye et al., 2024) or serving as surrogate model and candidate sampler in Bayesian Optimization (Liu ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Salesman Problem and numerical functions, (Liu et al., 2024a; Brahmachary et al., 2024; Huang et al., 2024a), as well as real-world scenarios spanning code generation (Morris et al., 2024; Romera- Paredes et al., 2024), robotic control (Lange et al., 2024), protein design (Tran & Hy, 2024), etc. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'shown great promise in minimizing reliance on handcrafted search heuristics, facilitating convenient problem specification in natural language and rendering evolutionary processes more interpretable. To date, they have showcased proficiency in classic optimization problems such as the Traveling ', 'modified_lines': ' ∗Corresponding authors. 
1 Published as a conference paper at ICLR 2025 ', 'original_lines': '', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2024; Romera-Paredes et al., 2024). It remains to be investigated whether the reasoning capabilities of LLMs could be further harnessed to guide more intelligent exploratory behaviors in the search space. For the other, current LLM-aided evolutionary approaches generally lack a strong connection to the specific nature of real-world problems, which leads to suboptimal performances and solutions that can not generalize well. To address the aforementioned limitations, here we propose LASeR – Large Language Model- Aided Evolutionary Search for Robot Design Automation. LASeR distinguishes itself from previ- ous LLM-aided evolutionary frameworks with a more delicate exploration strategy and generalizable optimization processes. Specifically, we present a novel Diversity Reflection Mechanism termed Di- viable modifications to enhance diversity while preserving essential functional substructures. This mechanism thus fosters more knowledgeable exploratory behaviors that closely align with task ob- jectives. Furthermore, by exploiting the abundant descriptive information available in robotic tasks, ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'measures have been taken to address this issue, including adjustments to the temperature parameter (Yang et al., 2024; Liu et al., 2024a; Pluhacek et al., 2024; Ma et al., 2024) or utilizing pre-existing natural selection techniques such as binary tournament selection and “island models” (Qiu et al., ', 'modified_lines': 'Recently, LLMs have also made their way into the realm of robot design automation. Robot design automation represents a persistent challenge in modern robotics that aims to evolve robot morphol- ogy with minimal human intervention (Hu et al., 2022; 2023; Song et al., 2024a). However, related work is sparse and only represent rudimentary attempts. To our best knowledge, the only pertinent studies are Zhang (2024), Qiu et al. (2024) and Lehman et al. (2023). While Zhang (2024) uti- lizes LLMs to tune the hyperparameters of traditional EAs, the latter two pioneer the use of LLMs as search operators for robot design. Nonetheless, they bear the same limitations as listed above, which greatly hinder the application of LLMs to robot design automation. In particular, with grow- ing interest in soft robots due to their versatility and biomimetic properties, their vast design spaces and intricacy of interaction dynamics among body parts cause existing search algorithms to gener- ally fall short. This highlights the need for more judicious exploration that navigates a variety of design options while ensuring progressive enhancement in functionality (Bhatia et al., 2021; Shah et al., 2021; Song et al., 2024a; Saito & Oka, 2024). Furthermore, as it is common to have access to a repository of pre-designed robots from related tasks when designing for new applications, it is highly relevant to explore the inter-task reasoning capabilities of LLMs to facilitate positive transfer of prior design experience, thus fostering more generalizable design processes. 
Rect, which strategically instructs an LLM to reflect upon previously generated designs and suggest ', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Recently, LLMs have also made their way into the realm of robot design automation, which repre- sents a persistent challenge in modern robotics that aims to evolve robot morphology with minimal human intervention (Hu et al., 2022; 2023; Song et al., 2024a). However, related work is sparse and only represent rudimentary attempts. To our best knowledge, the only pertinent studies are Zhang (2024), Qiu et al. (2024) and Lehman et al. (2023). While Zhang (2024) utilizes LLMs to tune the hyperparameters of traditional EAs, the latter two pioneer the use of LLMs as search op- erators for robot design. Nonetheless, they bear the same limitations as listed above, which greatly hinder the application of LLMs to robot design automation. In particular, with growing interest in soft robots due to their versatility and biomimetic properties, their vast design spaces and intricacy of interaction dynamics among body parts cause existing search algorithms to generally fall short, highlighting the need for more judicious exploration that navigate a variety of design options while ensuring progressive enhancement in functionality. (Bhatia et al., 2021; Shah et al., 2021; Song et al., 2024a; Saito & Oka, 2024). Furthermore, as it is common to have access to a repository of pre-designed robots from related tasks when designing for new applications, it is highly relevant to explore the inter-task reasoning capabilities of LLMs to facilitate positive transfer of prior design experience, thus fostering more generalizable design processes. Rect, which strategically instruct an LLM to reflect upon previously generated designs and suggest ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK Large Language Models as Evolutionary Search Operators. Large Language Models (LLMs) represent a class of deep generative neural networks comprising billions or trillions of parameters and pretrained on web-scale texual data. In recent years, LLMs have demonstrated impressive rea- soning, decision making, and generalization capabilities (Achiam et al., 2023; Touvron et al., 2023; Team et al., 2023; Team, 2023), which have sparked a flurry of research into exploiting them for op- ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'mer is particularly relevant for enhancing the robustness of robotic systems in volatile environments. (ii) With evolution firmly grounded in the background information of optimization tasks, we unlock the inter-task reasoning capabilities of LLMs in evolutionary computation, hopefully inspiring fu- ', 'modified_lines': 'ture work to further promote the generalizability of LLM-aided evolution across different problem settings. 
(iii) By unleashing the prowess of LLMs for robot design automation, we also aim to in- spire future work that synergizes both robotic design and control with LLMs, achieving closed-loop development of embodied agents. 2 Published as a conference paper at ICLR 2025 ', 'original_lines': 'ture work on further promoting generalizability of LLM-aided evolutionary processes across differ- ent problem settings. (iii) By fully unleashing the prowess of LLMs for robot design automation, we also aim to inspire future work that synergizes both design and control with LLMs, achieving closed-loop development of embodied agents. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': 'Abstract', 'after_section': None, 'context_after': 'substitutes for the manually designed search heuristics in traditional evolutionary algorithms (EAs), acting as novel, intelligent search operators. Since Lehman et al. (2023) introduced this LLM- aided evolutionary paradigm, subsequent studies have extended its methodology and showcased its ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'through iterative interactions. Moreover, LLMs are adept at conditioning problem-solving processes on various kinds of prior knowledge expressed in natural language, without needing tedious mathe- matical formulations (Song et al., 2024b). All these favorable attributes position LLMs as promising ', 'modified_lines': '', 'original_lines': ' 2 Under review as a conference paper at ICLR 2025 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 LASER: LLM-AIDED EVOLUTIONARY SEARCH FOR ROBOT DESIGN ', 'paragraph_idx': 10, 'before_section': '2 RELATED WORK', 'context_before': 'expertise and poorly generalizable. In this respect, Large Language Models, with their strong in- context learning abilities and extensive prior knowledge, hold the promise to transform the robotic design process (Stella et al., 2023). Nevertheless, the exploration of LLMs in this respect is sparse ', 'modified_lines': 'and warrants further investigation (Lehman et al., 2023; Zhang, 2024; Qiu et al., 2024). ', 'original_lines': 'and warrants further investigation (Lehman et al., 2023; Zhang, 2024; Qiu et al., 2024). While our work is based on simulation, we note that there is ongoing research on the realization of soft robotics in the physical world, using polymers with pneumatic chambers (Kriegman et al., 2020b; Legrand et al., 2023) or even self-replicating cells (Kriegman et al., 2020a; 2021) and continually narrowing the sim-to-real gap. We believe that with the collective efforts of material scientists, computer scientists, (bio)mechanical engineers, etc., soft robotics would see rapid advances and finds its way to everyday life in the near future. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 10}, {'section': '3.1 ALGORITHM FRAMEWORK', 'after_section': '3.1 ALGORITHM FRAMEWORK', 'context_after': 'the bi-level optimization framework commonly employed in robot design automation. 
Specifically, the inner loop optimizes a controller for each robot morphology through reinforcement learning, 3 Figure 1: (a) algorithm overview of LASeR; (b) the Diversity Reflection (DiRect) Mechanism; (c) an example illustrating how Diversity Reflection works on a carrying robot. The illustration takes ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '3.1 ALGORITHM FRAMEWORK ', 'modified_lines': 'As illustrated in Figure 1(a) and detailed by Algorithm 1 in Appendix P, we integrate an LLM into Published as a conference paper at ICLR 2025 ', 'original_lines': 'As illustrated in Figure 1(a) and detailed by Algorithm 1 in Appendix R, we integrate an LLM into with the resulting task performance serving as the fitness evaluation. The outer loop evolves a population of robot morphologies by carrying out natural selection and generating new offspring solutions in each generation. Here, instead of traditional evolutionary algorithms (EAs) that rely on Under review as a conference paper at ICLR 2025 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 ', 'after_paragraph_idx': 12, 'before_paragraph_idx': None}, {'section': '3.1 ALGORITHM FRAMEWORK', 'after_section': '3.1 ALGORITHM FRAMEWORK', 'context_after': 'manually designed search heuristics for generating offspring, an LLM is properly prompted to be our search operator. This is achieved by providing the LLM with previously evaluated robots and various kinds of metadata as context. However, we still bootstrap the evolutionary process with a ', 'paragraph_idx': 12, 'before_section': '3.1 ALGORITHM FRAMEWORK', 'context_before': ', ', 'modified_lines': 'with the resulting task performance serving as the fitness evaluation. The outer loop evolves a population of robot morphologies by carrying out natural selection and generating new offspring solutions in each generation. Here, instead of traditional evolutionary algorithms (EAs) that rely on ', 'original_lines': '', 'after_paragraph_idx': 12, 'before_paragraph_idx': 12}, {'section': '3.3 DIRECT: DIVERSITY REFLECTION MECHANISM', 'after_section': None, 'context_after': '3.4 LLM FOR INTER-TASK KNOWLEDGE TRANSFER ', 'paragraph_idx': 17, 'before_section': '3.3 DIRECT: DIVERSITY REFLECTION MECHANISM', 'context_before': 'according to these suggestions. In section 4 we show that this reflection mechanism fosters more beneficial exploratory behavior in the search space, leading to more diversified robot designs while maintaining relevance to the task objective. Figure 1(c) displays a specific example where DiRect ', 'modified_lines': 'helps to modify a newly proposed carrying robot. The similarity threshold s is an important hyper- parameter that controls the performance of LASeR. We include general principles for choosing s, supported by experimental evaluations, in Appendix L. ', 'original_lines': 'helps to modify a newly proposed carrying robot. We include general principles for choosing the similarity threshold s, supported by experimental evaluations, in Appendix N. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 17}, {'section': '3.5 FITNESS EVALUATION', 'after_section': '3.5 FITNESS EVALUATION', 'context_after': 'actuation signals, we measure the fitness of a robot by calculating the cumulative reward it receives over a complete episode, which reflects its performance in accomplishing a given task. 
For further details on the PPO algorithm, please refer to Schulman et al. (2017). 4 EXPERIMENTS • Q1: Can LASeR outperform state-of-the-art baselines in robot design automation? • Q2: To what extent does DiRect improve the exploration-exploitation tradeoff of LLM-aided evolution? • Q3: Does task metadata bring additional benefits to single-task robot design automation? More- over, does it aid inter-task experience transfer and enable zero-shot robot design for new tasks? • Q4: Previous studies have shown that different temperature parameters and versions of LLMs yield varying evolutionary outcomes. What are the specific impacts of these factors in our context? ', 'paragraph_idx': 19, 'before_section': '3.5 FITNESS EVALUATION', 'context_before': 'ment learning algorithms by incorporating importance sampling into gradient estimation, allowing for the reuse of sample trajectories across multiple parameter updates. The PPO algorithm alternates between two key phases – data collection and policy update – until a predefined number of iterations ', 'modified_lines': 'are completed. With an optimized controller that maps environmental observations to appropriate We begin this section with an introduction to our experimental setups, and then analyze the results of our comparison and ablation studies in detail. Our experiments are designed to address the following questions: ', 'original_lines': 'is completed. With an optimized controller that maps environmental observations to appropriate We begin this section with an introduction to our experimental setups, and then analyze the results of our comparison and ablation studies in detail. Our code is available on anonymous GitHub for replicability1. Our experiments are designed to address the following questions: ', 'after_paragraph_idx': 19, 'before_paragraph_idx': 19}, {'section': '3.2 PROMPT DESIGN', 'after_section': None, 'context_after': 'provided in Appendix B. For more information on EvoGym, please refer to Bhatia et al. (2021). Baselines. We compare our method against the following baselines: (i) Bayesian Optimization 6 Evaluation Metrics. We employ the following metrics to evaluate the performance of various approaches: (i) Maximal Fitness, defined as the fitness of the best-performing robot design achieved within a specific number of evaluations. This metric is commonly used in robot design automation to assess optimization efficiency. (ii) Diversity: Given the significance of developing diverse robotic ecosystems to handle volatile environments, we measure the diversity of high-performing robot robot designs (Saito & Oka, 2024), and the other is the total number of distinct high-performing robot designs. We further aggregate the two values via weighted averaging, where the latter is multiplied by 0.1 so that they are roughly on the same scale and given equal importance. Please tures of 1 and 1.5. Following the common practice in previous VSR studies (Song et al., 2024a; Saito & Oka, 2024; Dong et al., 2023; Bhatia et al., 2021), we choose the simple yet effective control pro- 4.2 COMPARISON STUDIES ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'Benchmark Setting. We base our experiments on Evolution Gym (EvoGym; Bhatia et al., 2021), a simulation environment designed for voxel-based soft robots (VSRs). In EvoGym, VSRs are repre- sented in a grid-like layout and consist of five types of voxels: rigid voxels, soft voxels, horizontal ', 'modified_lines': 'actuators, vertical actuators, and empty voxels. 
VSRs achieve motion control by altering the vol- umes of actuators either horizontally or vertically according to action signals. For benchmarking of single-task optimization, we select four task instances: one locomotion task, Walker-v0, and three manipulation tasks, Carrier-v0, Pusher-v0 and Catcher-v0. For experiments of inter-task knowl- edge transfer, we use BridgeWalker-v0 and UpStepper-v0. A detailed introduction to these tasks is (BO) (Kushner, 1964; Mockus, 1974)), a classic algorithm for optimizing expensive-to-evaluate functions. It employs a probabilistic model as the surrogate function and samples candidate solu- tions based on predicted mean and uncertainty. (ii) Speciated Evolver (SE) (Medvet et al., 2021)), a variant of the genetic algorithm (GA; Michalewicz, 2013) that divides the population into species (iii) RoboGAN (Hu et al., 2022), an to preserve diversity and prevent premature convergence. estimation-of-distribution algorithm (EDA) that utilizes the Generative Adversarial Network (GAN) to track the distribution of high-performing robot designs and generate new candidate solutions. (iv) The last baseline, which we term LLM-Tuner, is adapted from Zhang (2024) that uses LLMs to supervise the hyperparameter tuning of a genetic algorithm. Drawing comparison with LLM-Tuner would directly verify the benefits of LLMs serving as intelligent search operators. We addition- ally draw comparisons with two latest generative model-based evolutionary algorithms, MorphVAE (Song et al., 2024a) and OPRO (Yang et al., 2024), with results presented in Appendix F. Published as a conference paper at ICLR 2025 designs1 from two perspectives: one is the average edit distance among all pairs of high-performing refer to Appendix J for a detailed discussion on diversity measurement. We also include an analysis of computational efficiency in Appendix O. Implementation Details. We use GPT-4o-mini for both LASeR and LLM-Tuner, with the tempera- ture parameter set as 0.7. For ablation studies, we additionally try out GPT-3.5-Turbo and tempera- tocol for fitness evaluation, i.e., Multilayer Perceptron (MLP) as the controller for each robot design and PPO algorithm (Schulman et al., 2017) for policy training. Following previous studies on VSR design (Song et al., 2024a; Saito & Oka, 2024; Dong et al., 2023; Bhatia et al., 2021), robot designs are constrained to a 5×5 bounding box for an expressive yet tractable search space. Nevertheless, as demonstrated in Appendix G, our approach is scalable to larger design spaces. For fair comparison, each method is permitted 1000 robot evaluations. Experimental results of comparative studies are averaged across five independent runs. Our experiments are conducted on a server equipped with Intel Xeon processors running at 2.20 GHz and four NVIDIA Tesla RTX GPUs, with the system operating under Ubuntu 22.04. We relegate additional parameter settings to Appendix C. For further implementation details, please refer to our code repository2. ', 'original_lines': 'actuators, vertical actuators, and empty voxels. VSRs achieve motion control by altering the sizes of actuators either horizontally or vertically according to action signals. For benchmarking, we select three task instances: one locomotion task, Walker-v0, which requires a robot to walk as quickly as possible on flat terrain, and two manipulation tasks, Carrier-v0 and Pusher-v0, where the robot must carry or push a rectangular object besides fast locomotion. 
A detailed introduction to these tasks is (BO; Kushner, 1964; Mockus, 1974), a classic algorithm designed to optimize expensive-to- evaluate functions. It employs a probabilistic model (e.g. Gaussian Process) as a surrogate for the objective function and determines where to sample based on predicted mean and uncertainty. (ii) Speciated Evolver (SE; Medvet et al., 2021), a variant of the genetic algorithm (GA; Michalewicz, 2013) that divides the population into species to preserve diversity and prevent premature conver- gence. (iii) RoboGAN (Hu et al., 2022), an estimation-of-distribution algorithm (EDA) that utilizes the Generative Adversarial Network (GAN) to track the distribution of high-performing robot de- signs and generate new candidate solutions. (iv) The last baseline, which we term LLM-Tuner, is adapted from Zhang (2024) that uses LLMs to supervise the hyperparameter tuning of a genetic al- gorithm. Drawing comparison with LLM-Tuner would directly verify the benefits of LLMs serving as intelligent search operators. We additionally draw comparisons with two latest baselines, with results presented in Appendix G. 1https://anonymous.4open.science/r/LASeR-D5C2 Under review as a conference paper at ICLR 2025 designs2 from two perspectives: one is the average edit distance among all pairs of high-performing refer to Appendix L for a detailed discussion on diversity measurement. We also include an analysis of computational efficiency in Appendix Q. Implementation Details. We use GPT-4o-mini for both LASeR and LLM-Tuner, with the temper- ature parameter set as 0.7. For ablation studies, we additionally try out GPT-3.5-Turbo and tempera tocol for fitness evaluation, i.e. Multilayer Perceptron (MLP) as the controller for each robot design and PPO algorithm for policy training. Following previous studies on VSR design (Song et al., 2024a; Saito & Oka, 2024; Dong et al., 2023; Bhatia et al., 2021), robot designs are constrained to a 5 × 5 bounding box for an expressive yet tractable search space. Nevertheless, as demonstrated in Appendix I, our approach is scalable to larger design spaces. For fair comparison, each method is permitted 1000 robot evaluations. Experimental results are averaged across three independent runs to reduce randomness (we have currently implemented two more sets of repeated experiments for LASeR and LLM-Tuner, the most competitive baseline, and the results (with significance tests) are reported in Appendix H). Our experiments are conducted on a server equipped with Intel Xeon pro- cessors running at 2.20 GHz and four NVIDIA Tesla RTX GPUs, with the system operating under Ubuntu 22.04. We relegate additional parameter settings to Appendix C. For further implementation details, please refer to our code repository. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 COMPARISON STUDIES', 'after_section': '4.2 COMPARISON STUDIES', 'context_after': '(a) Walker-v0 ', 'paragraph_idx': 25, 'before_section': '4.2 COMPARISON STUDIES', 'context_before': 'only one exception on Walker-v0, where LLM-Tuner demonstrates slightly faster convergence in the early stage of evolution but ends up further from optimality. The superior performance of LASeR compared to LLM-Tuner highlights that LLMs have more important roles to play beyond merely ', 'modified_lines': 'tuning hyperparameters for traditional EAs. Results of significance tests are in Appendix D. ', 'original_lines': 'tuning hyperparameters for traditional EAs. 
', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '4.2 COMPARISON STUDIES', 'after_section': None, 'context_after': '7 BO SE RoboGAN LLM-Tuner Walker-v0 N/A N/A Carrier-v0 10.94 (N/A) Pusher-v0 N/A 4.2.2 INTER-TASK KNOWLEDGE TRANSFER 3The result is averaged over ten robot designs in each case. 8 4.3 ABLATION STUDIES 4.3.1 EFFECTIVENESS OF DIRECT LASeR LASeR w/o DiRect Walker-v0 16.96 (1.38) Carrier-v0 7.28 (1.13) Pusher-v0 10.50 (1.04) (a) Walker-v0 (b) Carrier-v0 (c) Pusher-v0 9 4.3.2 EFFECTIVENESS OF TASK-RELATED METADATA ', 'paragraph_idx': 25, 'before_section': '4.2 COMPARISON STUDIES', 'context_before': '(c) Pusher-v0 ', 'modified_lines': '(d) Catcher-v0 Figure 2: Comparative results of single-task optimization efficiency. Table 4.2.1 further demonstrates the diversity of high-performing robot designs achieved by dif- ferent methods. We observe that LASeR surpasses all baselines in three of the four tasks. We additionally compare the fitness performance of robot designs before and after being modified by DiRect, and find no significant difference (see Appendix E). This suggests that our Diversity Reflec- tion Mechanism indeed encourages the LLM to introduce variability into robot design while keeping 1We first calculate the 90% quantile of fitnesses obtained by all methods, and consider robot designs with fitness exceeding this threshold as high-performing. 2https://github.com/WoodySJR/LASeR 02505007501000Number of Evaluations10.510.610.7Maximal Fitness02505007501000Number of Evaluations510Maximal Fitness02505007501000Number of Evaluations7.510.012.5Maximal Fitness02505007501000Number of Evaluations0.70.8Maximal FitnessLASeR (ours) BO SE RoboGAN LLM-Tuner Published as a conference paper at ICLR 2025 its functionality largely intact. These results validate the distinct advantage of DiRect to promote beneficial exploratory behaviors directed towards high-performing regions. It is worth noting that the Bayesian Optimization algorithm, known for a balance between exploration and exploitation in its acquisition function, actually compromises a great deal of optimization efficiency for exploration and fails to generate high-performing robots in many cases. On the contrary, LASeR reshapes the exploration-exploitation tradeoff of LLM-aided evolution to yield dual benefits in optimization ef- ficiency and diversity. For separate results of the edit distance and the number of high-performing designs, please refer to Appendix J. However, we note that the diversity of LASeR in Catcher-v0 is not as competitive. We suspect this is due to task complexity. Specifically, for more challeng- ing tasks like Catcher-v0, while LLMs are still capable of extrapolating from existing solutions (as shown in Figure 2(d)), they struggle to recognize finer-grained functional structures. As a result, they might have difficulty introducing variability without compromising performance. We believe that helping LLMs better understand the roles of different parts within robot morphology—such as by incorporating images/videos of robots interacting with the environment—could be a promising direction for future research. Table 1: Comparative results of diversity (reported as mean (std)). 
LASeR (ours) 5.24 (0.26) 11.60 (4.35) 23.09 (5.33) 11.88 (2.77) 15.26 (1.52) 18.26 (6.00) 20.87 (4.27) 13.68 (2.10) 7.26 (3.91) 14.17 (6.83) 20.91 (8.85) Catcher-v0 19.76 (1.10) 13.91 (2.27) 18.27 (1.44) 13.89 (7.32) 8.07 (2.48) Average Rank 3 3.25 4 2.5 2 Note: When no more than one high-performing robot design is produced, diversity cannot be calculated. When this is the case across all repeated trials (e.g. BO on Walker-v0), the result is reported as “N/A”. When high-performing robots emerge in only one trial (e.g. RoboGAN in Carrier-v0), the standard deviation is unavailable and reported as “N/A”. Now we proceed to examine the ability of LLMs to transfer design experience across different tasks. To achieve this purpose, we introduce two more tasks: BridgeWalker-v0 and UpStepper- v0. Specifically, both BridgeWalker-v0 and UpStepper-v0 bear some resemblance to Walker-v0, but differ in their terrains: BridgeWalker-v0 involves locomotion on a soft rope-bridge, whereas UpStepper-v0 requires climbing stairs of varying lengths. The LLM is prompted to generate robot designs for each new task, given elite Walker-v0 robot designs. As shown in Figure 3(b)3, the zero- shot proposals by LLM outperform both randomly generated designs and elite Walker-v0 designs evolved by LASeR, in terms of accomplishing the new tasks. This serves as sound evidence that the LLM is not simply replicating examplars in its context, but rather assimilating design experience that is beneficial for new settings. This is largely owing to our incorporation of task-related metadata that provokes inter-task reasoning within the LLM. For illustration, Figure 3(a) demonstrates some insights that the LLM drew from Walker-v0 elites to transfer to BridgeWalker-v0. The zero-shot proposals are then leveraged as the initial population for further optimization. Figure 3(c-1) and 3(c-2) demonstrate that this informative initialization results in faster evolution than start- ing from scratch, and pulls away from baseline algorithms with even greater advantage. Also note that the zero-shot proposals for BridgeWalker-v0 turn out to be already near optimal before under- going marginal improvement with evolution. These promising results unprecedentedly uncover the possibility of generalizable evolutionary processes driven by LLMs and hopefully inspire closer in- vestigation in future work. Please note that while here we focus on intuitively similar task instances, we show in Appendix M that this is not strictly necessary for successful experience transfer. Published as a conference paper at ICLR 2025 Figure 3: Results of inter-task knowledge transfer. As shown in Table 4.3.1, the Diversity Reflection Mechanism fosters a robust increase in diversity compared to an ablated version. It is further demonstrated in Figure 4 that the exploratory behaviors led by DiRect also facilitate more efficient navigating of design spaces, leading to reduced sus- ceptibility to local optima and higher optimization efficiency. These results combine to underscore the distinct superiority of DiRect to yield dual benefits in optimization efficiency and diversity by exploiting the reasoning capabilities of LLMs. Table 2: Ablative results of diversity (reported as mean (std)) 23.09 (5.33) 20.87 (4.27) 20.91 (8.85) Catcher-v0 8.07 (2.48) 5.65 (1.11) (d) Catcher-v0 Figure 4: Ablative results of single-task optimization efficiency. The results of the ablated version are averaged across three repeated runs. 
(a) Example of design experience transferred from Walker-v0 to BridgeWalker-v0. (b) Zero-shot performance(c-2) UpStepper-v0(c-1) BridgeWalker-v002505007501000Number of Evaluations10.510.610.7Maximal Fitness02505007501000Number of Evaluations5.07.510.0Maximal Fitness02505007501000Number of Evaluations7.510.012.5Maximal Fitness02505007501000Number of Evaluations0.60.8Maximal FitnessLASeR LASeR (w/o DiRect) Published as a conference paper at ICLR 2025 ', 'original_lines': 'Figure 2: Comparative Results of Single-Task Optimization Table 1 further demonstrates the diversity of high-performing robot designs achieved by different methods. We observe that LASeR surpasses all baselines without exception. For visualizations of 2Specifically, we first calculate the 90% quantile of fitnesses obtained by all methods, and consider robot designs with fitness exceeding this threshold as high-performing. 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 02004006008001000Number of Evaluations10.5010.5510.6010.6510.70Maximal Fitness02505007501000Number of Evaluations24681012Maximal Fitness02004006008001000Number of Evaluations81012Maximal FitnessLASeR (ours) BO SE RoboGAN LLM-Tuner Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 evolved robot designs please refer to Appendix K. We additionally compare the fitness performance of robot designs before and after being modified by DiRect, and find no significant difference (see Appendix F for quantitative results and examples of DiRect modifications). This suggests that our Diversity Reflection Mechanism indeed encourages the LLM to introduce variability into robot de- sign while keeping its functionality largely intact. All these results combine to prove the distinct advantage of DiRect to promote beneficial exploratory behaviors directed towards high-performing regions. It is worth noting that the Bayesian Optimization algorithm, known for a balance between exploration and exploitation in its acquisition function, actually compromises a great deal of opti- mization efficiency for exploration and fails to generate high-performing robots in many cases. On the contrary, we reshape the exploration-exploitation tradeoff of LLM-aided evolution to yield dual benefits in optimization efficiency and diversity. We performed additional comparison on Catcher- v0, one of the most challenging tasks in EvoGym, to further showcase the effectiveness of LASeR (see Appendix E). Table 1: Comparative Results of Diversity LASeR(ours) 5.40 (0.30) 19.21 (11.25) 20.77 (4.77) 8.35 (N/A) 15.84 (0.77) 20.33 (6.53) 22.11 (3.88) 11.31 (N/A) 6.61 (3.45) 16.75 (5.95) 27.56 (1.66) Note 1: Since we have three repeated experiments, the results are reported as “mean(standard deviation)”. The same is true for Table 2. Note 2: When no more than one high-performing robot design is produced, diversity cannot be calculated. When this is the case across all repeated experiments (e.g. BO on Walker-v0), the cell is filled with “N/A”. If this is the case for two repeated experiments (e.g. BO on Carrier-v0), the standard deviation is unavailable and only the mean is reported. 
Note 3: For separate results of edit distance and the number of high-performing designs, please refer to Appendix L. Now we proceed to explore the ability of LLMs to transfer design experience across different tasks. To achieve this purpose, we introduce two more tasks: BridgeWalker-v0 and UpStepper-v0. Specif- ically, both BridgeWalker-v0 and UpStepper-v0 bear some resemblance to Walker-v0, but differ in their terrains: BridgeWalker-v0 involves locomotion on a soft rope-bridge, whereas UpStepper-v0 requires climbing stairs of varying lengths. The LLM is prompted to generate robot designs for each new task, given elite Walker-v0 robot designs. As shown in Figure 3(b)3, the zero-shot pro- posals by LLM outperform both randomly generated designs and those provided elite Walker-v0 designs which have finished their evolving with LASeR, in terms of accomplishing the new tasks. This serves as sound evidence that the LLM is not simply replicating examplars in its context, but rather assimilating design experience that is beneficial for new settings. This is largely owing to our incorporation of task-related metadata that provokes inter-task reasoning within the LLM. For illus- tration, Figure 3(a) demonstrates some insights that the LLM drew from Walker-v0 elites to transfer to BridgeWalker-v0. The zero-shot proposals are then leveraged as the initial population for further optimization. Fig- ure 3(c-1) and 3(c-2) demonstrate that this informative initialization results in faster evolution than starting from scratch, and pulls away from baseline algorithms with even greater advantage. Also note that the zero-shot proposals for BridgeWalker-v0 turn out to be already near optimal before un- dergoing marginal improvement with evolution. These promising results unprecedentedly uncover the possibility of generalizable evolutionary processes driven by LLMs and hopefully inspire closer investigation in future work. Please note that while here we focused on intuitively similar task in- stances, we prove in Appendix O that this prior knowledge regarding inter-task relationships is not necessary for successful experience transfer. Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Figure 3: Effectiveness of Inter-Task Knowledge Transfer As shown in Table 2, the Diversity Reflection Mechanism fosters a robust increase in diversity com- pared to an ablated version. It is further demonstrated in Figure 4 that the exploratory behaviors led by DiRect also facilitate more efficient navigating of design spaces, leading to reduced susceptibility to local optima and higher optimization efficiency. These results combine to underscore the distinct superiority of DiRect to yield dual benefits in optimization efficiency and diversity by exploiting the reasoning capabilities of LLMs. 
Table 2: Ablative Results of Diversity 20.77 (4.77) 22.11 (3.88) 27.56 (1.66) Figure 4: Effectiveness of DiRect (a) Example of Design Experience Transferred from Walker-v0 to BridgeWalker-v0(b) Zero-Shot Performance(c-2) UpStepper-v0(c-1) BridgeWalker-v002004006008001000Number of Evaluations10.5010.5510.6010.6510.70Maximal Fitness02004006008001000Number of Evaluations6810Maximal Fitness02004006008001000Number of Evaluations81012Maximal FitnessLASeR LASeR (w/o DiRect) Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 25}, {'section': '4.3 ABLATION STUDIES', 'after_section': None, 'context_after': '4.3.3 ', 'paragraph_idx': 37, 'before_section': '4.3 ABLATION STUDIES', 'context_before': 'grounding in task-related background information, which potentially impedes their performances in real-world applications. We test this conjecture by removing descriptions of task objectives and simulation environment from our prompts, and see a significant performance drop (Figure 5(a)), ', 'modified_lines': 'hence justifying our prompt design. For finer-grained ablations on individual components of the prompt, please see Appendix K. ', 'original_lines': 'hence justifying our prompt design. For finer-grained ablations on individual components of our prompt, please see Appendix M. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '4.3 ABLATION STUDIES', 'after_section': '4.3 ABLATION STUDIES', 'context_after': '(a) Impact of task metadata ', 'paragraph_idx': 35, 'before_section': '4.3 ABLATION STUDIES', 'context_before': 'Previous work has shown that the temperature parameter of LLMs has an unignorable influence on evolutionary outcomes, with higher temperatures tending to yield better results (Pluhacek et al., ', 'modified_lines': '2024). However, we observe a reverse effect where a lower temperature turns out slightly more fa- vorable (Figure 5(b)). We suspect that this is partly due to the complexity within VSR design, which necessitates precise extrapolation from an ascending sequence of solutions. Any deviation could lead to substantial performance drops, outweighing the benefits of random exploration. Meanwhile, we note that the ablation studies with temperature as 1.5 fail similarity checks only about 70% as often as when temperature equals 0.7. In words, higher temperatures would lead to greater but inef- fective variability in candidate solutions so that they could bypass diversity reflection. These results suggest that lower output temperatures are required for our approach to work better. Additionally, we observe the same improvement resulted from more up-to-date LLMs as in past literature (Fig- ure 5(c)). This shows promise of robot design automation directly benefiting from better language models, which puts us in a strategic position to ride the wave of rapidly progressing LLMs. ', 'original_lines': '2024). However, we observe a reverse effect where a lower temperature turns out slightly more favorable (Figure 5(b)). We suspect that this is partly due to the complexity within VSR design, which necessitates precise extrapolation from an ascending sequence of solutions. Any deviation could lead to substantial performance drops, outweighing the benefits of random exploration. This again underscores the superiority of our proposed DiRect mechanism, which resorts to more edu- cated exploration strategies. 
Meanwhile, we note that the ablation studies with temperature as 1.5 fail similarity checks only about 70% as often as when temperature equals 0.7. In words, higher tem- peratures would lead to greater but ineffective variability in candidate solutions so that they could bypass diversity reflection. These results suggest that lower output temperatures are required for our approach to work better. Additionally, we observe the same improvement resulted from more up-to-date LLMs as in past literature (Figure 5(c)). This shows promise of robot design automation directly benefiting from better language models, which puts us in a strategic position to ride the wave of rapidly progressing LLMs. ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': '5 CONCLUDING REMARKS', 'after_section': None, 'context_after': '5 CONCLUDING REMARKS ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '(c) Impact of LLM version ', 'modified_lines': 'Figure 5: Additional ablation studies on Carrier-v0. The results of the ablated versions are averaged across three repeated runs. ', 'original_lines': 'Figure 5: Additional ablation studies on Carrier-v0 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ETHICS STATEMENT This work uses simulated task environments which have been commonly used in previous research of robot design automation and should not be regarded controversial. Our use of Large Language Models is strictly confined to simulated robot design generation without real-world deployment, and therefore does not involve any safety risks. REPRODUCIBILITY STATEMENT REFERENCES ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'provements in optimization efficiency and diversity. We additionally propose to ground robot design on rich task-related metadata and uncover the intriguing inter-task reasoning capabilities of LLMs to foster generalizable design processes across different applications. Our experiments with simulated ', 'modified_lines': 'voxel-based soft robots demonstrate superior performances of our approach compared to competi- tive baselines. Scaling up LASeR for multi-task optimization would hopefully further harness the inter-task reasoning abilities of LLMs to boost sample efficiency. It is also interesting to investigate how LLM-aided control strategies (Wang et al., 2023a) could be integrated into our framework, so that LLMs are not only responsible for action planning, but also enabled to design their own em- bodiments, hence exploiting the synergy between design and control. We leave these for our future work. For a more detailed discussion of limitations and open problems, please refer to Appendix N. 10 02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (w/o task metadata)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (temperature=1)LASeR (temperature=1.5)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (GPT-3.5) Published as a conference paper at ICLR 2025 Our code is readily available on GitHub. This work uses GPT-3-turbo and GPT-4o-mini, whose APIs are publicly accessible. However, due to the uncontrollable random generator seeds behind closed- source LLMs, experiments involving these models generally suffer from limited reproducibility (Huang et al., 2024a). Developing reproducible methods for API calls would significantly improve the replicability of research outcomes involving Large Language Models. 
', 'original_lines': 'voxel-based soft robots demonstrate superior performances of our approach compared to competitive baselines. Scaling up LASeR for multi-task optimization would hopefully further harness the inter- task reasoning abilities of LLMs to boost sample efficiency. The recent advancements in prompt engineering, such as Chain-of-Thoughts (Wei et al., 2022) and Tree-of-Thoughts (Yao et al., 2024), also hold promise for further unleashing the potential of LLMs in robot design automation and war- rant further examination. Moreover, it is interesting to investigate how LLM-aided control strategies (Wang et al., 2023a) could be integrated into our framework, so that LLMs are not only responsible for action planning, but also enabled to design their own embodiments, hence exploiting the synergy between design and control. We leave these for our future work. For more detailed discussions of limitations and open problems for future research, please see Appendix P. 10 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 02004006008001000Number of Evaluations8910Maximal FitnessLASeRLASeR (w/o task metadata)02004006008001000Number of Evaluations78910Maximal FitnessLASeR (temperature=0.7)LASeR (temperature=1)LASeR (temperature=1.5)02004006008001000Number of Evaluations78910Maximal FitnessLASeR (GPT-4o-mini)LASeR (GPT-3.5) Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 Our code is readily available on anonymous GitHub. This work uses GPT-3-turbo and GPT-4o- mini, whose APIs are publicly accessible. However, due to the uncontrollable random generator seeds behind close-source LLMs, experiments involving these models generally suffer from poor reproducibility (Huang et al., 2024a). Developing reproducible methods for API calls would signif- icantly improve the replicability of research outcomes involving Large Language Models. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, and Kay Chen Tan. Evolutionary computation in the era of large language model: Survey and roadmap. arXiv preprint arXiv:2401.10034, 2024. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Wang. Preco: Enhancing generalization in co-design of modular soft robots via brain-body pre- training. In Conference on Robot Learning, pp. 478–498. PMLR, 2023b. ', 'modified_lines': '', 'original_lines': 'Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, and Guojie Song. Large language models as hyper-heuristics for combinatorial optimization, 2024. URL https://arxiv.org/abs/2402.01145. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Chen. Large language models as optimizers, 2024. 
URL https://arxiv.org/abs/2309. 03409. ', 'modified_lines': '', 'original_lines': 'Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad- vances in Neural Information Processing Systems, 36, 2024. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 COMPARISON STUDIES', 'after_section': None, 'context_after': 'following components one at a time: (a) the description of the simulation engine; (b) the description of task objectives; (c) the just-ask query (or target fitness); in this case, the LLM is simply prompted to generated robot designs with higher fitness; and (d) the ascending ordering of elite design-fitness ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'We have demonstrated the indispensability of task-related metadata in Section 4.3.2. To further jus- tify our prompt design and to complement the intuitive explanations provided above, we conducted ', 'modified_lines': 'finer-grained ablation studies and the results are reported in Figure 18. Specifically, we remove the ', 'original_lines': 'finer-grained ablation studies and the results are reported in Figure 21. Specifically, we remove the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTAL SETUPS', 'after_section': '4.1 EXPERIMENTAL SETUPS', 'context_after': 'ber of distinct designs into account. We hope our work could inspire future work to develop even more reasonable and hopefully universally acceptable metrics for diversity. According to the data released on LLM Leaderboard (https://artificialanalysis.ai/leaderboards/models), for GPT-4o-mini, the median rate of output token generation is 99.8 tokens per second, and the 130 API calls per generation, with each call involving approximately 180 output tokens (here we assume the worst case where each newly generated robot design triggers DiRect), this results in an overhead of around 5 minutes per generation, or 5 hours in total. ', 'paragraph_idx': 23, 'before_section': None, 'context_before': '• Morphological diversity is an important aspect for evaluating robot design algorithms, as diversified designs are crucial for ensuring the robustness of robotic systems in highly volatile environments. We pointed out the limitations of previous diversity measures in ', 'modified_lines': 'Appendix J and proposed to make a correction by taking both distinctiveness and the num- • Last but not least, while our work is based on simulation, we note that there is ongoing research on the realization of soft robotics in the physical world, using polymers with pneu- matic chambers (Kriegman et al., 2020b; Legrand et al., 2023) or even self-replicating cells (Kriegman et al., 2020a; 2021) and continually narrowing the sim-to-real gap. We believe that with the collective efforts of material scientists, computer scientists, (bio)mechanical engineers, etc., soft robotics would see rapid advances and finds its way to everyday life in the near future. O AN ANALYSIS OF COMPUTATIONAL EFFICIENCY latency (i.e., time to first token) is reported as 0.5 seconds. Given that LASeR makes an average of ', 'original_lines': 'Appendix L and proposed to make a correction by taking both distinctiveness and the num- Q AN ANALYSIS OF COMPUTATIONAL EFFICIENCY latency (i.e. time to first token) is reported as 0.5 seconds. Given that LASeR makes an average of ', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}]
2025-03-01 06:05:05
ICLR.cc/2025/Conference
fJSw80QUUp
FVeWAXC0Je
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '∗Corresponding authors. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'shown great promise in minimizing reliance on handcrafted search heuristics, facilitating convenient problem specification in natural language and rendering evolutionary processes more interpretable. To date, they have showcased proficiency in classic optimization problems such as the Traveling ', 'modified_lines': 'Salesman Problem and numerical functions, (Liu et al., 2024a; Brahmachary et al., 2024; Huang ', 'original_lines': '', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': 'Abstract', 'after_section': '1 INTRODUCTION', 'context_after': 'et al., 2024a), as well as real-world scenarios spanning code generation (Morris et al., 2024; Romera- Paredes et al., 2024), robotic control (Lange et al., 2024), protein design (Tran & Hy, 2024), etc. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'Salesman Problem and numerical functions, (Liu et al., 2024a; Brahmachary et al., 2024; Huang ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Published as a conference paper at ICLR 2025 soning, decision making, and generalization capabilities (Achiam et al., 2023; Touvron et al., 2023; Team et al., 2023; Team, 2023), which have sparked a flurry of research into exploiting them for op- timization problems (Huang et al., 2024b; Wu et al., 2024). By receiving history search trajectories ', 'paragraph_idx': 9, 'before_section': '2 RELATED WORK', 'context_before': 'Large Language Models as Evolutionary Search Operators. Large Language Models (LLMs) represent a class of deep generative neural networks comprising billions or trillions of parameters ', 'modified_lines': 'and pretrained on web-scale texual data. In recent years, LLMs have demonstrated impressive rea- ', 'original_lines': 'and pretrained on web-scale texual data. In recent years, LLMs have demonstrated impressive rea- ', 'after_paragraph_idx': None, 'before_paragraph_idx': 9}, {'section': '3.1 ALGORITHM FRAMEWORK', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 12, 'before_section': '3.1 ALGORITHM FRAMEWORK', 'context_before': 'As illustrated in Figure 1(a) and detailed by Algorithm 1 in Appendix P, we integrate an LLM into the bi-level optimization framework commonly employed in robot design automation. Specifically, the inner loop optimizes a controller for each robot morphology through reinforcement learning, ', 'modified_lines': 'with the resulting task performance serving as the fitness evaluation. The outer loop evolves a ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': 'Abstract', 'after_section': None, 'context_after': 'population of robot morphologies by carrying out natural selection and generating new offspring solutions in each generation. Here, instead of traditional evolutionary algorithms (EAs) that rely on manually designed search heuristics for generating offspring, an LLM is properly prompted to be ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': ', ', 'modified_lines': '', 'original_lines': 'with the resulting task performance serving as the fitness evaluation. 
The outer loop evolves a ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '4.3 ABLATION STUDIES', 'after_section': '4.3 ABLATION STUDIES', 'context_after': 'Figure 5: Additional ablation studies on Carrier-v0. The results of the ablated versions are averaged across three repeated runs. 5 CONCLUDING REMARKS ', 'paragraph_idx': 35, 'before_section': '4.3 ABLATION STUDIES', 'context_before': '(a) Impact of task metadata (b) Impact of LLM temperature ', 'modified_lines': ' (c) Impact of LLM version ', 'original_lines': ' (c) Impact of LLM version ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': '5 CONCLUDING REMARKS', 'after_section': '5 CONCLUDING REMARKS', 'context_after': 'ETHICS STATEMENT This work uses simulated task environments which have been commonly used in previous research of robot design automation and should not be regarded controversial. Our use of Large Language Models is strictly confined to simulated robot design generation without real-world deployment, and therefore does not involve any safety risks. ', 'paragraph_idx': 41, 'before_section': '5 CONCLUDING REMARKS', 'context_before': 'bodiments, hence exploiting the synergy between design and control. We leave these for our future work. For a more detailed discussion of limitations and open problems, please refer to Appendix N. ', 'modified_lines': 'ACKNOWLEDGEMENTS This work is funded by National Natural Science Foundation of China (No.72371241), the MOE Project of Key Research Institute of Humanities and Social Sciences (22JJD110001), and Zhiqiang Foundation. The authors would like to thank all the anonymous reviewers for their valuable com- ments. 10 02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (w/o task metadata)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (temperature=1)LASeR (temperature=1.5)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (GPT-3.5) Published as a conference paper at ICLR 2025 ', 'original_lines': ' 10 02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (w/o task metadata)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (temperature=1)LASeR (temperature=1.5)02505007501000Number of Evaluations810Maximal FitnessLASeRLASeR (GPT-3.5) Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 40}]
2025-03-01 10:14:30
ICLR.cc/2025/Conference
FVeWAXC0Je
t4ZD6x4VhP
[]
2025-03-05 08:52:58
ICLR.cc/2025/Conference
t4ZD6x4VhP
LHnCs2xGBW
[]
2025-03-30 13:57:03
ICLR.cc/2025/Conference
4Sr0Da32LV
oaZRId1dby
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'identifying the model that maximizes an evaluation score based on the diversity and quality of the generated data. However, such a best-model identification ap- proach overlooks the possibility that a mixture of available models can outper- ', 'modified_lines': 'form each individual model. In this work, we numerically show that a mixture of generative models on benchmark image datasets can indeed achieve a better eval- uation score (based on FID and KID scores), compared to the individual models. This observation motivates the development of efficient algorithms for selecting the optimal mixture of the models. To address this, we formulate a quadratic optimization problem to find an optimal mixture model achieving the maximum of kernel-based evaluation scores including kernel inception distance (KID) and R´enyi kernel entropy (RKE). To identify the optimal mixture of the models us- ing the fewest possible sample queries, we view the selection task as a multi- armed bandit (MAB) problem and propose the Mixture Upper Confidence Bound (Mixture-UCB) algorithm that provably converges to the optimal mixture of the in- volved models. More broadly, the proposed Mixture-UCB can be extended to op- timize every convex quadratic function of the mixture weights in a general MAB setting. We prove a regret bound for the Mixture-UCB algorithm and perform several numerical experiments to show the success of Mixture-UCB in finding the optimal mixture of text and image generative models. The project code is available at https://github.com/Rezaei-Parham/Mixture-UCB. ', 'original_lines': 'form each individual model. In this work, we explore the selection of a mixture of multiple generative models and formulate a quadratic optimization problem to find an optimal mixture model achieving the maximum of kernel-based evaluation scores including kernel inception distance (KID) and R´enyi kernel entropy (RKE). To identify the optimal mixture of the models using the fewest possible sample queries, we propose an online learning approach called Mixture Upper Confidence Bound (Mixture-UCB). Specifically, our proposed online learning method can be extended to every convex quadratic function of the mixture weights, for which we prove a concentration bound to enable the application of the UCB approach. We prove a regret bound for the proposed Mixture-UCB algorithm and perform several numerical experiments to show the success of the proposed Mixture-UCB method in finding the optimal mixture of text-based and image-based generative models. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'We perform several numerical experiments to test the application of our proposed Mixture-UCB approach in comparison to the Vanilla-UCB and One-Arm Oracle approaches that tend to generate samples from only one of the available generative models. Our numerical results indicate that the Mixture-UCB algorithms can generate samples with higher RKE diversity scores, and tends to gen- erate samples from a mixture of several generative models when applied to image-based generative models. 
Also, we test the performance of Mixture-UCB on the KID, Precision, and Density scores, ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'The rapid advancements in generative modeling have created a need for mechanisms to combine multiple well-trained generative models, each developed using different algorithms and architec- ', 'modified_lines': 'tures, into a single unified model. Consider m unconditional generative models G1, . . . , Gm, where each Gi represents a probability model PGi according to which new samples are generated. A com- mon approach for creating a unified model is to compute evaluation scores (e.g., the standard FID (Heusel et al., 2017) and KID (Bi´nkowski et al., 2018) scores) that quantify the diversity and fidelity of the generated data, followed by selecting the model PGi∗ with the best evaluated score. This best-score model selection strategy has been widely adopted for choosing generative models across various domains, including image, text, and video data generation. However, the model selection approach by identifying the score-maximizing model overlooks the possibility that a mixture of the generative models α1PG1 + · · · + αmPGm, where each sample is generated from a randomly-selected model with αi being the probability of selecting model Gi, can outperform every individual model. This motivates the following question: Can there be real-world settings where a non-degenerate mixture of some well-trained generative models obtain a better evaluation score compared to each individual model? Note that the standard FID and KID scores are convex functions of the generative model’s distribution, and thus they can be optimized by a non-degenerate mixture of the models. In this work, we numerically show that it is possible for a 1 Published as a conference paper at ICLR 2025 Figure 1: A mixture (right-most case) of FFHQ pre-trained generative models with weights (0.25,0.4,0.01,0.08,0.26) achieves better FID and KID scores compared to each of the five involved models. The mixture weights are computed using our proposed Mixture-UCB-OGD algorithm. mixture of real-world generative models to improve evaluation scores over the individual models. An example is shown in Figure 1, where we find a non-degenerate mixture of five generative models pre-trained on the FFHQ dataset1. As shown, assigning mixture weights (0.25, 0.4, 0.01, 0.08, 0.26) to the models results in a significantly better FID score 170.11 than the best individual FID 185.07. To understand the improvement achieved by the mixture model, we note that the FID and KID scores evaluate both the quality and diversity of the generated data. While the averaged quality of generated samples represents an expected value that is optimized by an individual model, the diversity of samples from a mixture of the models can significantly improve over the diversity of the individual models’ data. Figure 2 displays an illustrative example for this point, where we observe that the diversity of “red bird, cartoon style” samples generated by each of the three text-to-image models, is qualitatively and quantitatively2 lower than the diversity of their mixture. As a result, the improvement in the diversity of a mixture of generative models can result in an improved FID and KID evaluation scores, as we numerically observe in Figure 1. 
1.1 COMPUTING OPTIMAL MIXTURES OF GENERATIVE MODELS: THE MIXTURE-UCB MULTI-ARMED BANDIT ALGORITHM Since the evaluation score of a mixture of generative models can improve over the scores of the individual models, a natural question is how to efficiently compute the weights of an optimal mixture of the models using the fewest possible samples from the models. Here, our goal is to minimize the number of sample generation queries from sub-optimal models, which will save the time and monetary costs of identifying the best model. To achieve this, we propose viewing the task as a multi-armed bandit (MAB) problem, in which every generative model represents an arm and our goal is to find the best mixture of the models with the optimal evaluation score. The MAB approach for selecting among generative models has been recently explored by Hu et al. (2024) applying the Upper Confidence Bound (UCB) algorithm to the FID score. This MAB-based model selection extends the online model selection methods for supervised models, including the successive halving strategy (Karnin et al., 2013; Jamieson & Talwalkar, 2016; Chen & Ghosh, 2024)3. However, in the existing MAB algorithms, the goal is to eventually converge to a single arm with the optimal score. Successive halving (Karnin et al., 2013; Jamieson & Talwalkar, 2016; Chen & Ghosh, 2024) and the UCB algorithm developed by Hu et al. (2024) will ultimately select only one generative model after a sufficient number of iterations. However, as we discussed earlier, the eval- uation scores of the generated data could be higher when the sample generation follows a mixture of models rather than a single model. This observation leads to the following task: Developing an MAB algorithm that finds the optimal mixture of the arms rather than the single best arm. 1The pre-trained generative models are downloaded from dgm-eval GitHub repository (Stein et al., 2023). 2We have evaluated the diversity scores RKE (Jalali et al., 2023) and Vendi (Dan Friedman & Dieng, 2023). 3Jamieson & Talwalkar (2016) focused on applying successive halving on hyperparameter optimization for supervised learning, whereas Chen & Ghosh (2024) focused on generative models using maximum mean discrepancy (Gretton et al., 2012) as the score. 2 LDM (G1)StyleGAN-XL (G2)StyleNAT (G5)Efficient-VDVAE (G3)InsGen (G4)25% G1+ 40% G2 +1% G3 + 8% G4 +26% G5 FID ↓189.88 ± 1.98 186.16 ± 2.75 185.07 ± 2.12 490.39 ± 4.38 278.24 ± 1.62 170.11 ± 1.93 KID ↓(×102)1.48 ± 0.021.36 ± 0.035.34 ± 0.05 2.29 ± 0.03 1.44 ± 0.03 1.33 ± 0.07 Published as a conference paper at ICLR 2025 Figure 2: Visual comparison of the diversity across individual arms and the optimal mixture for images generated using models Kandinsky 3, Stable Diffusion 3, and PixArt-α with the prompt “Red bird, cartoon style”. The mixture weights are computed via the Mixture-UCB-OGD method. In this work, we develop a general MAB method, which we call the Mixture-UCB algorithm, that can provably find the best mixture of the arms when the evaluation score is a quadratic function of the arms’ distribution, i.e. when it represents the average of scores assigned to the pairs of drawn samples. Formulating the optimization problem for a quadratic score function results in a quadratic online convex optimization problem that can be efficiently solved using the online gradient descent algorithm. 
More importantly, we establish a concentration bound for the quadratic function of the mixture weights, which enables us to extend the UCB algorithm to the Mixture-UCB method for the online selection of the mixture weights. For selecting mixtures of generative models using Mixture-UCB, we focus on evaluation scores that reduce to a quadratic function of the generative model’s distribution, including Kernel Inception Distance (KID) (Bi´nkowski et al., 2018), R´enyi Kernel Entropy (RKE) (Jalali et al., 2023) scores, as well as the quality-measuring Precision (Sajjadi et al., 2018; Kynk¨a¨anniemi et al., 2019) and Density (Naeem et al., 2020) scores, which are linear functions of the generative distribution. Among these scores, RKE provides a reference-free entropy function for assessing the diversity of generated data, making it suitable for quantifying the variety of generated samples. Our mixture-based MAB frame- work can therefore be applied to find the mixture model with the maximum RKE-based diversity score. Additionally, we consider a linear combination of RKE with the Precision, Density, or KID quality scores to find a mixture of models that offers the best trade-off between quality and diversity. ', 'original_lines': 'tures, into a single unified model. A common approach for creating such a unified model is to evaluate assessment scores that quantify the diversity and quality of the generated data and then select the model with the highest score. This best-model identification strategy has been widely adopted for the selection of generative models across various domains, including image, text, audio, and video generation. Figure 1: Visual comparison of the diversity across individual arms and the optimal mixture for images generated using the prompt “Dark green giraffe, detailed, cartoon style”. 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 PixArt-α31% Kandinsky 3 +54% Stable Diffusion 3 +15% PixArt-α2.81 ± 0.034.09 ± 0.062.77 ± 0.035.93 ± 0.075.72 ± 0.084.13 ± 0.063.95 ± 0.057.39 ± 0.06RKE ↑Vendi ↑Stable Diffusion 3Kandinsky 3 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Existing model selection frameworks typically perform an offline selection, where they have access to a sufficiently large number of samples from each generative model and estimate the evaluation score based on these samples. However, in many practical scenarios, generating a large sample set from sub-optimal models can be computationally costly, especially if the evaluator can identify their lack of optimality using fewer samples. In such cases, the evaluator can adopt an online learning approach and frame the problem as a multi-armed bandit (MAB) task. In each round, we choose a model to generate one sample, where the choice of model is based on previous samples. This allows us to quickly identify obviously sub-optimal models and avoid them, reducing the cost of generating from sub-optimal models. 
We only assume black-box access to the generative models, making our setting applicable to devices with limited computational resources (which accesses the models remotely via online services, without running the models locally), and to proprietary models where only black-box access is available. An existing approach is successive halving (Karnin et al., 2013; Jamieson & Talwalkar, 2016; Chen & Ghosh, 2024),1 where the models are evaluated using a fixed budget, the worst half of the models are removed, and we repeat the process until one model is left. Also, the recent work by Hu et al. (2024) attempts to solve the online model selection problem using an upper confidence bound (UCB) method to identify the generative model with the highest evaluation score. The numerical results of Hu et al. (2024) indicate the effectiveness of MAB algorithms in reducing sample generation costs from sub-optimal models. On the other hand, when the model selection task is handled by an online learning algorithm, the algorithm may choose different models at different iterations, resulting in generated data that follow a mixture of the distributions of these generative models. Note that in a standard MAB algorithm, the goal is to eventually converge to a single arm. Successive halving (Karnin et al., 2013; Jamieson & Talwalkar, 2016; Chen & Ghosh, 2024) and the standard UCB algorithm adopted by Hu et al. (2024) will ultimately select only one generative model after a sufficient number of iterations. However, the diversity scores of the generated data could be higher when the sample generation follows a mixture of models rather than a single model. This observation leads to the following question: could the diversity of generated data be improved by applying MAB algorithms to multiple generative models, if the algorithm aims to find the best mixture of the models? In this work, we aim to address the above question by finding the optimal mixture (cid:80)m i=1 αiPi of m generative models with distributions P1, . . . , Pm, which would produce a higher evaluation score compared to each individual model. Specifically, we focus on addressing this task in an online learn- ing setting, where we pick a model to generate a sample at each round. To address this problem and develop an MAB algorithm to find the best mixture model, we concentrate on evaluation scores that are quadratic functions of the generated data. As we show in this work, formulating the optimization problem for a quadratic score function results in a quadratic online convex optimization problem that can be efficiently solved using the online gradient descent algorithm. More importantly, we establish a concentration bound for the quadratic function of the mixture weights, which enables us to extend the UCB algorithm for the online selection of the mixture weights. Specifically, we focus on evaluation scores that reduce to a quadratic function of the generative model’s distribution, including the kernel-based Maximum Mean Discrepancy (MMD) (Gretton et al., 2012), Kernel Inception Distance (KID) (Bi´nkowski et al., 2018) and R´enyi Kernel Entropy (RKE) (Jalali et al., 2023) scores, as well as the quality-measuring Precision (Sajjadi et al., 2018; Kynk¨a¨anniemi et al., 2019) and Density (Naeem et al., 2020) scores, which are linear functions of the generative distribution. Among these scores, RKE provides a reference-free entropy function for assessing the diversity of generated data, making it suitable for quantifying the variety of generated samples. 
Our mixture-based online learning framework can therefore be applied to find the mixture model with the maximum RKE-based diversity score. Additionally, we can consider a linear com- bination of RKE with the Precision, Density, or KID quality scores to identify a mixture of models that offers the best trade-off between quality and diversity. 1Jamieson & Talwalkar (2016) focused on applying successive halving on hyperparameter optimization for supervised learning, whereas Chen & Ghosh (2024) focused on generative models using maximum mean discrepancy (Gretton et al., 2012) as the score. 2 Under review as a conference paper at ICLR 2025 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 3}, {'section': '1.1 COMPUTING OPTIMAL MIXTURES OF GENERATIVE MODELS: THE MIXTURE-UCB', 'after_section': '1.1 COMPUTING OPTIMAL MIXTURES OF GENERATIVE MODELS: THE MIXTURE-UCB', 'context_after': '• Developing the Mixture-UCB-CAB and Mixture-UCB-OGD algorithms to solve the formulated • Presenting numerical results on the improvements in the diversity of generated data by the online 2 RELATED WORK Multi-Armed Bandit Algorithms. The Multi-Armed Bandit (MAB) problem is a foundational topic in reinforcement learning, where an agent aims to maximize rewards from multiple options (arms) with initially unknown reward distributions (Lai & Robbins, 1985; Thompson, 1933). The Another related reference is informational multi-armed bandits (Weinberger & Yemini, 2023), which 3 PRELIMINARIES We review several kernel-based performance metrics of generative models. 3.1 R ´ENYI KERNEL ENTROPY The R´enyi Kernel Entropy (Jalali et al., 2023) of the distribution P , which measures the diversity of the modes in P , is given by log(1/E Taking the exponential of the R´enyi Kernel Entropy, we have the RKE mode count 1/E[k2(X, X ′)]) (Jalali et al., 2023), which is an estimate of the number of modes. Maximizing the RKE mode count is equivalent to minimizing the following loss ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'iments, both implementations result in satisfactory results and can improve upon learning strategies tending to select only one generative model. Here is a summary of this work’s contributions: ', 'modified_lines': '• Studying the selection task for mixtures of multiple generative models to improve the evaluation scores of generated samples (Section 4). • Proposing an online learning multi-armed bandit framework to address the mixture selection task for quadratic score functions (Section 5). online learning problem and proving a regret bound for Mixture-UCB-CAB (Sections 5.1, 5.2). selection of a mixture of the generation models (Section 6, Appendix 8.4). 3 PixArt-α28% Kandinsky 3 +40% Stable Diffusion 3 +32% PixArt-α3.94 ± 0.024.90 ± 0.034.44 ± 0.028.21 ± 0.037.79 ± 0.036.49 ± 0.037.08 ± 0.0211.42 ± 0.04RKE ↑Vendi ↑Stable Diffusion 3Kandinsky 3 Published as a conference paper at ICLR 2025 Assessment of Generative Models. The evaluation of generative models has been extensively stud- ied, with a focus on both diversity and quality of generated images. 
Reference-free metrics such as R´enyi Kernel Entropy (RKE) (Jalali et al., 2023) and VENDI (Dan Friedman & Dieng, 2023; Ospanov et al., 2024) measure diversity without relying on ground-truth, while reference-based metrics such as Recall (Sajjadi et al., 2018; Kynk¨a¨anniemi et al., 2019) and Coverage (Naeem et al., 2020) assess diversity relative to real data. For image quality evaluation, Density and Precision metrics (Naeem et al., 2020; Kynk¨a¨anniemi et al., 2019) provide measures based on alignment with a reference distribution. The Wasserstein distance (Arjovsky et al., 2017) and Fr´echet Inception Distance (FID) (Heusel et al., 2017) approximate the distance between real and generated datasets, while Kernel Inception Distance (KID) (Bi´nkowski et al., 2018) uses squared maximum mean dis- crepancy for a kernel-based comparison of distributions. Wang et al. (2023) apply the KID score for the distributed evaluation of generative models. The novelty evaluation of generative models has also been studied in References (Han et al., 2022; Jiralerspong et al., 2023; Zhang et al., 2024b;a). Upper Confidence Bound (UCB) algorithm (Agrawal, 1995a; Auer, 2002; Bubeck et al., 2012) is a widely adopted method for addressing the MAB problem, where uncertainty about an arm’s reward is replaced by an optimistic estimate. In generative models, optimism-based bandits have been applied to efficiently identify models with optimal Fr´echet Inception Distance (FID) or Inception Score while minimizing data queries (Hu et al., 2024). A special case of MAB, the continuum- armed bandit (CAB) problem (Agrawal, 1995b), optimizes a function over continuous inputs, and has been applied to machine learning tasks such as hyperparameter optimization (Feurer & Hutter, 2019; Li et al., 2018). Recent research explores CABs under more general smoothness conditions like Besov spaces (Singh, 2021), while other works have focused on regret bounds and Lipschitz conditions (Kleinberg, 2004; Kleinberg et al., 2019; Bubeck et al., 2008). extends UCB to maximizing the Shannon entropy of a discrete distribution. In comparison, the al- gorithms in this paper can minimize the expectation of any quadratic positive-semidefinite function, which also covers the order-2 R´enyi entropy for discrete distributions. Since the generative models’ outputs are generally continuous, (Weinberger & Yemini, 2023) is not applicable to our setting. [k2(X, X ′)]), where k is a positive definite kernel.4 ', 'original_lines': '• Studying the selection task for mixtures of multiple generative models to improve the diversity of generated samples (Section 4). • Proposing an online learning framework to address the mixture selection task for quadratic score functions (Section 5). online learning problem (Sections 5.1, 5.2). • Proving a regret bound for Mixture-UCB-CAB which shows the convergence of Mixture-UCB- CAB to the optimal mixture (Theorem 2). selection of a mixture of the generation models (Section 6, Appendix 8.3). Assessment of Generative Models. The evaluation of generative models has been extensively studied, with a focus on both diversity and quality of generated images. Reference-free metrics such as R´enyi Kernel Entropy (RKE) (Jalali et al., 2023) and VENDI (Friedman & Dieng, 2023) measure diversity without relying on ground-truth, while reference-based metrics such as Recall (Kynk¨a¨anniemi et al., 2019) and Coverage (Naeem et al., 2020) assess diversity relative to real data. 
For image quality evaluation, Density and Precision metrics (Naeem et al., 2020; Kynk¨a¨anniemi et al., 2019) provide measures based on alignment with a reference distribution. The Wasserstein distance (Arjovsky et al., 2017) and Fr´echet Inception Distance (FID) (Heusel et al., 2018) ap- proximate the distance between real and generated datasets, while Kernel Inception Distance (KID) (Bi´nkowski et al., 2018) uses squared maximum mean discrepancy for a kernel-based comparison of distributions. Upper Confidence Bound (UCB) algorithm (Agrawal, 1995a; Auer, 2003; Bubeck & Cesa-Bianchi, 2012) is a widely adopted method for addressing the MAB problem, where uncertainty about an In generative models, optimism-based ban- arm’s reward is replaced by an optimistic estimate. dits have been applied to efficiently identify models with optimal Fr´echet Inception Distance (FID) or Inception Score while minimizing data queries (Hu et al., 2024). A special case of MAB, the continuum-armed bandit (CAB) problem (Agrawal, 1995b), optimizes a function over continuous inputs, and has been applied to machine learning tasks such as hyperparameter optimization (Feurer & Hutter, 2019; Li et al., 2018). Recent research explores CABs under more general smoothness conditions like Besov spaces (Singh, 2021), while other works have focused on regret bounds and Lipschitz conditions (Kleinberg, 2004; Kleinberg et al., 2019; Bubeck et al., 2008). extends UCB to maximizing the Shannon entropy of a discrete distribution, which is also a metric of diversity. In comparison, the algorithms in this paper can minimize the expectation of any quadratic positive semidefinite function, which not only covers the order-2 R´enyi entropy for discrete distribu- tions, but also includes the R´enyi Kernel Entropy applicable to continuous data. Since the outputs of generative models are generally continuous, (Weinberger & Yemini, 2023) is not applicable here. 3 Under review as a conference paper at ICLR 2025 [k2(X, X ′)]), where k is a positive definite kernel.2 ', 'after_paragraph_idx': 12, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ˆK(x) := (cid:16) 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'a=1 ', 'modified_lines': '', 'original_lines': '2The order-2 R´enyi entropy for discrete distributions is a special case by taking k(x, x′) = 1x=x′ . 4 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 ONLINE SELECTION OF OPTIMAL MIXTURES – MIXTURE MULTI-ARMED', 'after_section': None, 'context_after': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT ', 'paragraph_idx': 30, 'before_section': '5 ONLINE SELECTION OF OPTIMAL MIXTURES – MIXTURE MULTI-ARMED', 'context_before': 'terms κ(x(s), x(t)) between samples at different rounds. Note that if κ(x, x′) = 0, then this reduces to the conventional multi-armed bandit setting by taking f (x) to be the negative reward of the sample x. In the following subsections, we will propose two new algorithms that are generalizations of the ', 'modified_lines': 'upper confidence bound (UCB) algorithm for multi-armed bandit (Agrawal, 1995a; Auer, 2002). 
', 'original_lines': 'upper confidence bound (UCB) algorithm for multi-armed bandit (Agrawal, 1995a; Auer, 2003). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 30}, {'section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'after_section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'context_after': 'Theorem 2 Suppose m ≥ 2, β ≥ 4. Consider bounded quadratic loss function (4) with κ being positive semidefinite. Let ˆP (T ) be the empirical distribution of the first T ≥ 2 samples x(T ) given ', 'paragraph_idx': 35, 'before_section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'context_before': 'We now prove that Mixture-UCB-CAB gives an expected loss E[L( ˆP (T ))] that converges to the optimal loss minα L(α) by bounding their gap. This means that Mixture-UCB-CAB is a zero- ', 'modified_lines': 'regret strategy by treating E[L( ˆP (T ))] − minα L(α) as the average regret per round.7 The proof is given in Appendix 8.2. ', 'original_lines': 'regret strategy by treating E[L( ˆP (T ))] − minα L(α) as the average regret per round.5 The proof is given in Appendix 8.4. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 35}, {'section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'after_section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'context_after': 'The main difference between Mixture-UCB-CAB and conventional UCB is that we choose a mixture of arms in (6) given by the probability vector α, instead of a single arm. A more straightforward ', 'paragraph_idx': 37, 'before_section': '5.1 MIXTURE UPPER CONFIDENCE BOUND – CONTINUUM-ARMED BANDIT', 'context_before': 'When κ(x, x′) = 0, Mixture-UCB-CAB reduces to the conventional UCB, and Theorem 2 coincides with the O((cid:112)(m log T )/T ) distribution-free bound on the regret per round of conventional UCB ', 'modified_lines': '(Bubeck et al., 2012). Since there is a Ω((cid:112)m/T ) minimax lower bound on the regret per round even for conventional multi-armed bandit without the quadratic kernel term (Bubeck et al., 2012, Theorem 3.4), Theorem 2 is tight up to a logarithmic factor. ', 'original_lines': '(Bubeck & Cesa-Bianchi, 2012). Since there is a Ω((cid:112)m/T ) minimax lower bound on the regret per round even for conventional multi-armed bandit without the quadratic kernel term (Bubeck & Cesa-Bianchi, 2012, Theorem 3.4), Theorem 2 is tight up to a logarithmic factor. ', 'after_paragraph_idx': 38, 'before_paragraph_idx': 37}, {'section': 'Abstract', 'after_section': None, 'context_after': '0 and f (x) = −r(x) where r(x) is the reward of the sample x, i.e., the loss L(P ) = EX∼P [f (X)] is linear, T (E[L( ˆP (T ))] − minα L(α)) = T maxi∈[m] EX∼Pi [r(X)] − E[(cid:80)T t=1 r(x(t))] indeed reduces to the conventional notion of regret. So R can be regarded as the quadratic generalization of regret. Algorithm 2 Mixture-UCB-OGD 1: Input: m generative arms, number of rounds T ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'pulling the best single arm instead of the optimal mixture. Vanilla-UCB will be used as a baseline to be compared with Mixture-UCB-CAB, and another new algorithm presented in the next section. ', 'modified_lines': 'Mixture-UCB-CAB can be extended to the Sparse-Mixture-UCB-CAB algorithm which eventually select only a small number of models. This can be useful if there is a subscription cost for each model. Refer to Appendix 8.3 for discussions. 
5.2 MIXTURE UPPER CONFIDENCE BOUND – ONLINE GRADIENT DESCENT We present an alternative to Mixture-UCB-CAB, called the mixture upper confidence bound – online gradient descent (Mixture-UCB-OGD) algorithm, inspired by the online gradient descent algorithm (Shalev-Shwartz et al., 2012). It also has a parameter β > 1. Refer to Algorithm 2. Mixture-UCB-CAB and Mixture-UCB-OGD can both be regarded as generalizations of the origi- nal UCB algorithm, in the sense that they reduce to UCB when κ(x, x′) = 0. If we remove the ˆK(x)n(t) term in (8), then Mixture-UCB-OGD becomes the same as UCB. 2 t 6Theorem 1 holds for a fixed α. A worst-case bound that simultaneously holds for every α is in Lemma 1. 7To justify calling R := E[L( ˆP (T ))] − minα L(α) the average regret per round, note that when κ(x, x′) = 7We may also consider the scenario where each pull gives a batch of l samples instead of only one sample. In this case, we will have x b,n , . . . , x (t−1) b +1 b,n (t−1) b +l ∼ Pb and n(t) b = n(t−1) b + l. 7 Published as a conference paper at ICLR 2025 ', 'original_lines': '5To justify calling R := E[L( ˆP (T ))] − minα L(α) the average regret per round, note that when κ(x, x′) = 6 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Both Mixture-UCB-CAB and Mixture-UCB-OGD attempt to make the “proportion vector” n(t)/t (note that n(t) i /t is the proportion of samples from model i) approach the optimal mixture α∗ that ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'b +1 ∼ Pb. ', 'modified_lines': '', 'original_lines': 'Mixture-UCB-CAB can be extended to the Sparse-Mixture-UCB-CAB algorithm which eventually select only a small number of models. This can be useful if there is a subscription cost for each model. Refer to Appendix 8.2 for discussions. 5.2 MIXTURE UPPER CONFIDENCE BOUND – ONLINE GRADIENT DESCENT We present an alternative to Mixture-UCB-CAB, called the mixture upper confidence bound – online gradient descent (Mixture-UCB-OGD) algorithm, inspired by the online gradient descent algorithm (Shalev-Shwartz et al., 2012). It also has a parameter β > 1. Refer to Algorithm 2. Mixture-UCB-CAB and Mixture-UCB-OGD can both be regarded as generalizations of the origi- nal UCB algorithm, in the sense that they reduce to UCB when κ(x, x′) = 0. If we remove the ˆK(x)n(t) term in (8), then Mixture-UCB-OGD becomes the same as UCB. 2 t ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 MIXTURE UPPER CONFIDENCE BOUND – ONLINE GRADIENT DESCENT', 'after_section': '5.2 MIXTURE UPPER CONFIDENCE BOUND – ONLINE GRADIENT DESCENT', 'context_after': 'Theorem 2 seems to be difficult to derive, and is left for future research. 6 NUMERICAL RESULTS ', 'paragraph_idx': 45, 'before_section': '5.2 MIXTURE UPPER CONFIDENCE BOUND – ONLINE GRADIENT DESCENT', 'context_before': 'An advantage of Mixture-UCB-OGD is that the computation of gradient (8) is significantly faster than the quadratic program (6) in Mixture-UCB-CAB. The running time complexity of Mixture- ', 'modified_lines': 'UCB-OGD is O(T 2 + T m2).8 Nevertheless, a regret bound for Mixture-UCB-OGD similar to ', 'original_lines': 'UCB-OGD is O(T 2 + T m2).6 Nevertheless, a regret bound for Mixture-UCB-OGD similar to ', 'after_paragraph_idx': 45, 'before_paragraph_idx': 45}, {'section': 'Abstract', 'after_section': None, 'context_after': 'each arm. 
The number of chosen samples varies based on the experiments. This is an unrealistic setting that only serves as a theoretical upper bound of the performance of any online algorithm. A realistic algorithm that performs close to the mixture oracle would be almost optimal. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• Mixture Oracle. In the mixture oracle algorithm (Section 5), an oracle tells us the optimal mixture α∗ in advance, and we pull arms randomly according to this distribution. The optimal mixture is calculated by solving the quadratic optimization in Section 4 on a large number of samples for ', 'modified_lines': '', 'original_lines': ' 5We may also consider the scenario where each pull gives a batch of l samples instead of only one sample. In this case, we will have x b,n (t−1) b , . . . , x b,n +1 (t−1) b ∼ Pb and n(t) b = n(t−1) b + l. +l 6To update ˆK(x(t)) after a new sample x′ is obtained, we only need to compute κ(x, x′) for each existing sample x, and add their contributions to the corresponding entries in ˆK(x(t)), requiring a computational time that is linear with the number of existing samples. 7 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 (a) FFHQ generators (b) LSUN-Bedroom generators (c) FFHQ truncated generators Figure 2: Performance comparison of online algorithms for the KID metric across FFHQ, LSUN- Bedroom, and FFHQ Truncated generators. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 NUMERICAL RESULTS', 'after_section': '6 NUMERICAL RESULTS', 'context_after': '• Mixture-UCB-CAB. The mixture upper confidence bound – continuum-armed bandit algorithm proposed in Section 5.1. • Mixture-UCB-OGD. The mixture upper confidence bound – online gradient descent algorithm proposed in Section 5.2. 6.1 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA KID We conducted three experiments to evaluate our method using the Kernel Inception Distance (KID) metric. In the first experiment, we used five distinct generative models: LDM (Rombach et al., images centered on eight randomly selected points, using StyleGAN2-ADA (Karras et al., 2020), better KID scores compared to individual models. Additionally, the two Mixture-UCB algorithms consistently outperform the baselines. ', 'paragraph_idx': 46, 'before_section': '6 NUMERICAL RESULTS', 'context_before': 'a baseline for the purpose of comparison. ', 'modified_lines': '• Successive Halving. The Success Halving algorithm (Karnin et al., 2013; Jamieson & Talwalkar, 2016; Chen & Ghosh, 2024) which serves as a second baseline for comparison. 8To update ˆK(x(t)) after a new sample x′ is obtained, we only need to compute κ(x, x′) for each existing sample x, and add their contributions to the corresponding entries in ˆK(x(t)), requiring a computational time that is linear with the number of existing samples. 8 Published as a conference paper at ICLR 2025 Experiments Setup. We used DINOv2-ViT-L/14 (Oquab et al., 2023) for image feature extrac- tion, as recommended in (Stein et al., 2023), and RoBERTa (Liu et al., 2019) as the text encoder. 
Detailed explanation of the setup for each experiment is presented in Section 8.4. (a) FFHQ generators (b) LSUN-Bedroom generators (c) FFHQ truncated generators Figure 3: Performance comparison of online algorithms for the KID metric across FFHQ, LSUN- Bedroom, and FFHQ Truncated generators. 2022), StyleGAN-XL (Sauer et al., 2022), Efficient-VDVAE (Hazami et al., 2022), InsGen (Yang et al., 2021), and StyleNAT (Walton et al., 2022), all trained on the FFHQ dataset (Karras et al., 2019). In the second experiment, we used generated images from four models9: StyleGAN (Karras et al., 2019), Projected GAN (Sauer et al., 2021), iDDPM (Nichol & Dhariwal, 2021), and Un- leashing Transformers (Bond-Taylor et al., 2022), all trained on the LSUN-Bedroom dataset (Yu et al., 2015). This experiment followed a similar setup to the first. In the final experiment, we em- ployed the truncation method (Marchesi, 2017; Karras et al., 2019) to generate diversity-controlled also trained on the FFHQ dataset. Figure 3 demonstrates that the mixture of generators achieves ', 'original_lines': 'Experiments Setup. We used DINOv2-ViT-L/14 (Oquab et al., 2024) for image feature extrac- tion, as recommended in (Stein et al., 2023), and utilized RoBERTa (Liu et al., 2019) as the text encoder. Detailed explanation of the setup for each experiment is presented in Section 8.3. 2022), StyleGAN-XL (Sauer et al., 2022), Efficient-vdVAE (Hazami et al., 2022), InsGen (Yang et al., 2021), and StyleNAT (Walton et al., 2023), all trained on the FFHQ dataset (Karras et al., 2019b). In the second experiment, we used generated images from four models7: StyleGAN (Kar- ras et al., 2019a), Projected GAN (Sauer et al., 2021b), iDDPM (Nichol & Dhariwal, 2021), and Unleashing Transformers (Bond-Taylor et al., 2021), all trained on the LSUN-Bedroom dataset (Yu et al., 2016). This experiment followed a similar setup to the first. In the final experiment, we em- ployed the truncation method (Marchesi, 2017; Karras et al., 2019b) to generate diversity-controlled also trained on the FFHQ dataset. Figure 2 demonstrates that the mixture of generators achieves ', 'after_paragraph_idx': 46, 'before_paragraph_idx': 46}, {'section': '6.2 OPTIMAL MIXTURE FOR DIVERSITY VIA RKE', 'after_section': '6.2 OPTIMAL MIXTURE FOR DIVERSITY VIA RKE', 'context_after': 'Synthetic Unconditional Generative Models We conduct two experiments on diversity-limited generative models. First, we used eight center points with a truncation value of 0.3 to generate images using StyleGAN2-ADA, trained on the FFHQ dataset. In the second experiment, we applied PixArt-α (Chen et al., 2023a), and Stable Diffusion XL—to generate images of the object “Sofa”. Shepherd, and Poodle, respectively. This illustrates the challenge of generating diverse object types (a) Dog breed generators ', 'paragraph_idx': 51, 'before_section': '6.2 OPTIMAL MIXTURE FOR DIVERSITY VIA RKE', 'context_before': 'mixing the models on the diversity and the advantage of our algorithms Mixture-UCB-CAB and Mixture-UCB-OGD. The score in the plots is the RKE Mode Count, written as RKE for brevity. ', 'modified_lines': 'the same model, trained on the AFHQ Cat dataset (Choi et al., 2020), with a truncation value of 0.4. As shown in Figure 7, the optimal mixture and our algorithms consistently achieve higher RKE scores. The increase in diversity is visually depicted in Figures 6 and 8. 
Text to Image Generative Models We used Stable Diffusion XL (Podell et al., 2024) with spe- cific prompts to create three car image generators with distinct styles: realistic, surreal, and cartoon. In the second experiment, recognizing the importance of diversity in generative models for design tasks, we used five models—FLUX.1-Schnell (Labs, 2024), Kandinsky 3.0 (Vladimir et al., 2024), 9FFHQ and LSUN-Bedroom datasets were downloaded from the dgm-eval repository (Stein et al., 2023) (licensed under MIT license): https://github.com/layer6ai-labs/dgm-eval. 9 200040006000Step1.41.61.8KID×10−2Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving200040006000Step1.51.82.12.4KID×10−2Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving0200400Step123KID×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving Published as a conference paper at ICLR 2025 In a similar manner, we generated red bird images using Kandinsky 3.0, Stable Diffusion 3 (Esser et al., 2024), and PixArt-α, as shown in Figure 2.. Finally, in the third experiment, we used Sta- ble Diffusion XL to simulate models generating images of different dog breeds: Bulldog, German with text-to-image models. Figure 10 demonstrates the impact of using a mixture of models in the first and third experiments. The improvement in diversity is evident visually and quantitatively, as shown by the RKE scores. Our online algorithms consistently generate more diverse samples than others, as illustrated in Figure 4. ', 'original_lines': '7FFHQ and LSUN-Bedroom datasets were downloaded from the dgm-eval repository (Stein et al., 2023) (licensed under MIT license): https://github.com/layer6ai-labs/dgm-eval. 8 200040006000Step1.301.351.401.451.501.551.60KID×10−2Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD200040006000Step1.41.51.61.71.81.9KID×10−2Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD0100200300400Step0.51.01.52.02.53.0KID×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD Under review as a conference paper at ICLR 2025 (a) FFHQ truncated generators (b) AFHQ truncated generators Figure 3: Performance comparison of online algorithms based on the RKE metric for Simulator Unconditional Generative Models. the same model, trained on the AFHQ Cat dataset (Choi et al., 2020), with a truncation value of 0.4. As shown in Figure 3, the mixture achieves a higher RKE score, and our algorithms consistently give a higher RKE value. The increase in diversity is visually depicted in Figures 7 and 8. Figure 4: Visual comparison of the diversity across individual arms and the optimal mixture for Dog Breed Generators and Style-Specific Generators. Text to Image Generative Models We used Stable Diffusion XL (Podell et al., 2023) with specific prompts to create three car image generators with distinct styles: realistic, surreal, and cartoon. In the second experiment, recognizing the importance of diversity in generative models for design tasks, we used five models—FLUX.1-Schnell (Lab, 2024), Kandinsky 3.0 (Arkhipkin et al., 2024), In a similar manner, we generated green giraffe images using Kandinsky 3.0, Stable Diffusion 3 (Esser et al., 2024), and PixArt-α, as shown in Figure 1.. Finally, in the third experiment, we used Stable Diffusion XL to simulate models generating images of different dog breeds: Bulldog, German with text-to-image models. 
Figure 4 illustrates the impact of using a mixture of models in the first and third experiments. The improvement in diversity is evident both visually and quantitatively, as reflected in the RKE Scores. As shown in Figure 5, our online algorithms consistently outperform others in generating more diverse samples. 9 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 050010001500Step0.500.751.001.251.501.75RKE×101Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD02004006008001000Step0.81.01.21.41.6RKE×101Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDGenerative Model 1Generative Model 2Generative Model 3Optimal Mixtureweights = (0.33-0.31-0.36)1.51 ± 0.01RKE ±1.48 ± 0.021.64 ± 0.013.04 ± 0.02weights = (0.66-0.26-0.07)7.86 ± 0.164.83 ± 0.085.15 ± 0.109.19 ± 0.14RKE ± Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 ', 'after_paragraph_idx': 51, 'before_paragraph_idx': 50}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': '6.3 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA RKE AND PRECISION/DENSITY ', 'paragraph_idx': 59, 'before_section': None, 'context_before': '(c) Style-specific generators ', 'modified_lines': 'Figure 4: Performance comparison of online algorithms using RKE score of T2I generative models. ', 'original_lines': 'Figure 5: Performance comparison of online algorithms using the RKE metric for text-to-image generative models. Text Generative Models We utilized the OpenLLMText dataset (Chen et al., 2023b), which com- prises 60,000 human texts rephrased paragraph by paragraph using the GPT2-XL (Radford et al., 2019), LLaMA-7B (Touvron et al., 2023), and PaLM (Chowdhery et al., 2022) models. To extract textual features, we employed the RoBERTa Text Encoder. As shown in Figure 11a in Section 8.3.2, the results demonstrate the advantage of our online algorithms, suggesting that our method applies not only to image generators but also to text generators. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.3 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA RKE AND PRECISION/DENSITY', 'after_section': '6.3 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA RKE AND PRECISION/DENSITY', 'context_after': 'optimal mixtures with higher diversity/quality score. (a) RKE w/Precision (b) RKE w/Density sion and RKE with Density metrics. 7 CONCLUSION ', 'paragraph_idx': 56, 'before_section': '6.3 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA RKE AND PRECISION/DENSITY', 'context_before': 'well as RKE and Density (Naeem et al., 2020). We conduct experiments in which quality is a key consideration. We use four arms: three are StyleGAN2-ADA models trained on the FFHQ dataset, each generating images with a truncation of 0.3 around randomly selected center points. The fourth ', 'modified_lines': 'model is StyleGAN2-ADA trained on CIFAR-10 (Krizhevsky et al., 2009). The FFHQ dataset is used as the reference dataset. 
Figures 5 and 13 demonstrate the ability of our algorithms in finding Figure 5: Performance comparison of online algorithms using the combination of RKE with Preci- ', 'original_lines': 'model is StyleGAN2-ADA trained on CIFAR-10 (Krizhevsky & Hinton, 2009). The FFHQ dataset is used as the reference dataset. Figures 6 and 12 demonstrate the ability of our algorithms in finding Figure 6: Performance comparison of online algorithms using the combination of RKE with Preci- ', 'after_paragraph_idx': 56, 'before_paragraph_idx': 56}, {'section': '1.1 COMPUTING OPTIMAL MIXTURES OF GENERATIVE MODELS: THE MIXTURE-UCB', 'after_section': None, 'context_after': '10 REFERENCES Rajeev Agrawal. Sample mean based index policies by O(log n) regret for the multi-armed bandit problem. Advances in applied probability, 27(4):1054–1078, 1995a. Mikołaj Bi´nkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In International Conference on Learning Representations, 2018. leashing transformers: Parallel token prediction with discrete absorbing diffusion for fast high- Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Luming Chen and Sujit K Ghosh. Fast model selection and hyperparameter tuning for generative models. Entropy, 26(2):150, 2024. Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam 11 Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M¨uller, Harry Saini, Yam Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch¨olkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Mohammad Jalali, Cheuk Ting Li, and Farzan Farnia. An information-theoretic evaluation of gen- erative models in learning multi-modal distributions. Advances in Neural Information Processing ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'fact that a mixture of generative models could achieve a higher score compared to each individual model. We proposed the Mixture-UCB-CAB and Mixture-UCB-OGD online learning algorithms to find the optimal mixture. Our experiments suggest the usefulness of the algorithm in improving ', 'modified_lines': 'the performance scores over individual arms. Extending the algorithm to conditional and prompt- guided generative models is a relevant topic for future exploration. Also, the theoretical analysis of the regret of Mixture-UCB-OGD and characterization of conditions under which a mixture can or cannot improve the scores over the best single arm will be interesting topics for future studies. 
0100200Step1.01.52.02.53.0RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving200400600Step5.56.06.57.07.5RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving250500750Step6789RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving0100020003000Step2.753.003.253.503.75RKE - λ Precision×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving0100020003000Step1.01.21.41.6RKE - λ Density×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGDSuccessive Halving Published as a conference paper at ICLR 2025 ACKNOWLEDGMENTS The work of Cheuk Ting Li is partially supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No.s: CUHK 24205621 (ECS), CUHK 14209823 (GRF)]. The work of Farzan Farnia is partially supported by a grant from the Re- search Grants Council of the Hong Kong Special Administrative Region, China, Project 14209920, and is partially supported by CUHK Direct Research Grants with CUHK Project No. 4055164 and 4937054. Finally, the authors would like to thank the anonymous reviewers for their constructive feedback and suggestions. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Ale- man, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Rajeev Agrawal. The continuum-armed bandit problem. SIAM journal on control and optimization, 33(6):1926–1951, 1995b. Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. PMLR, 2017. Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397–422, 2002. Sam Bond-Taylor, Peter Hessey, Hiroshi Sasaki, Toby P Breckon, and Chris G Willcocks. Un- resolution image generation from vector-quantized codes. In European Conference on Computer Vision, pp. 170–188. Springer, 2022. S´ebastien Bubeck, Gilles Stoltz, Csaba Szepesv´ari, and R´emi Munos. Online optimization in x- armed bandits. volume 21, 2008. S´ebastien Bubeck, Nicolo Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi- armed bandit problems. Foundations and Trends® in Machine Learning, 5(1):1–122, 2012. Kwok, Ping Luo, Huchuan Lu, et al. Pixart-α: Fast training of diffusion transformer for photore- alistic text-to-image synthesis. arXiv preprint arXiv:2310.00426, 2023a. Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, and Bhiksha Raj. Token prediction as implicit classification to identify llm-generated text. arXiv preprint arXiv:2311.08723, 2023b. for multiple domains. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8188–8197, 2020. Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240): 1–113, 2023. Dan Dan Friedman and Adji Bousso Dieng. The vendi score: A diversity evaluation metric for machine learning. Transactions on machine learning research, 2023. Published as a conference paper at ICLR 2025 Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first international conference on machine learning, 2024. 
Matthias Feurer and Frank Hutter. Hyperparameter optimization. Springer International Publishing, 2019. Jiyeon Han, Hwanil Choi, Yunjey Choi, Junho Kim, Jung-Woo Ha, and Jaesik Choi. Rarity score: A new metric to evaluate the uncommonness of synthesized images. arXiv preprint arXiv:2206.08549, 2022. Louay Hazami, Rayhane Mama, and Ragavan Thurairatnam. Efficientvdvae: Less is more. arXiv preprint arXiv:2203.13751, 2022. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. Xiaoyan Hu, Ho-fung Leung, and Farzan Farnia. An optimism-based approach to online evaluation of generative models. arXiv preprint arXiv:2406.07451, 2024. ', 'original_lines': 'the performance scores over individual arms. Extending the algorithm to conditional and text-based generative models is a topic for future exploration. In addition, the application of the algorithm to other data domains, including text, audio, and video, is an interesting future direction. 050100150200250Step1.01.52.02.53.0RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD200400600Step6.06.57.07.5RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD200400600800Step6.06.57.07.58.08.59.0RKEMixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD0100020003000Step3.03.23.43.6RKE-Precision Mixture×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD0100020003000Step1.01.21.41.6RKE-Density Mixture×10−1Mixture OracleOne-Arm OracleVanilla-UCBMixture-UCB-CABMixture-UCB-OGD Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 Rajeev Agrawal. The continuum-armed bandit problem. SIAM Journal on Control and Optimization, 33(6):1926–1951, 1995b. doi: 10.1137/S0363012992237273. URL https://doi.org/10. 1137/S0363012992237273. Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein generative adversarial net- works. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Con- ference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214–223. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/ arjovsky17a.html. Vladimir Arkhipkin, Andrei Filatov, Viacheslav Vasilev, Anastasia Maltseva, Said Azizov, Igor Pavlov, Julia Agafonova, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky 3.0 technical re- port, 2023. Vladimir Arkhipkin, Andrei Filatov, Viacheslav Vasilev, Anastasia Maltseva, Said Azizov, Igor Pavlov, Julia Agafonova, Andrey Kuznetsov, and Denis Dimitrov. Kandinsky 3.0 technical re- port, 2024. URL https://arxiv.org/abs/2312.03511. Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3:397–422, March 2003. ISSN 1532-4435. Sam Bond-Taylor, Peter Hessey, Hiroshi Sasaki, Toby P. Breckon, and Chris G. Willcocks. Un- resolution image generation from vector-quantized codes, 2021. URL https://arxiv.org/ abs/2111.12701. S´ebastien Bubeck, Gilles Stoltz, Csaba Szepesv´ari, and R´emi Munos. Online optimiza- tion in x-armed bandits. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou (eds.), Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., URL https://proceedings.neurips.cc/paper_files/paper/2008/ 2008. 
file/f387624df552cea2f369918c5e1e12bc-Paper.pdf. S´ebastien Bubeck and Nicol`o Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi- armed bandit problems, 2012. URL https://arxiv.org/abs/1204.5721. Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis, 2023a. URL https://arxiv.org/abs/2310. 00426. Yutian Chen, Hao Kang, Yiyan Zhai, Liangze Li, Rita Singh, and Bhiksha Raj. Openllmtext dataset, 2023b. URL https://zenodo.org/doi/10.5281/zenodo.8285326. for multiple domains, 2020. URL https://arxiv.org/abs/1912.01865. Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Lev- skaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Bren- nan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311. M. A. Efroymson. Multiple regression analysis. In A. Ralston and H. S. Wilf (eds.), Mathematical Methods for Digital Computers, volume 1, pp. 191–203. John Wiley & Sons, Inc., 1960. Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion En- glish, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis, 2024. URL https://arxiv.org/abs/ 2403.03206. Matthias Feurer and Frank Hutter. Hyperparameter Optimization, pp. 3–33. Springer International Publishing, Cham, 2019. ISBN 978-3-030-05318-5. doi: 10.1007/978-3-030-05318-5 1. URL https://doi.org/10.1007/978-3-030-05318-5_1. Dan Friedman and Adji Bousso Dieng. The vendi score: A diversity evaluation metric for machine learning, 2023. URL https://arxiv.org/abs/2210.02410. Louay Hazami, Rayhane Mama, and Ragavan Thurairatnam. Efficient-vdvae: Less is more, 2022. URL https://arxiv.org/abs/2203.13751. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018. URL https://arxiv.org/abs/1706.08500. Xiaoyan Hu, Ho fung Leung, and Farzan Farnia. An optimism-based approach to online evaluation of generative models, 2024. URL https://arxiv.org/abs/2406.07451. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.1 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA KID', 'after_section': None, 'context_after': 'Algorithm 3 Sparse-Mixture-UCB-CAB ', 'paragraph_idx': 49, 'before_section': None, 'context_before': 'FFHQ Generated Images. 
In this experiment, we used images generated by five different models: ', 'modified_lines': 'LDM (Rombach et al., 2022), StyleGAN-XL (Sauer et al., 2022), Efficient-VDVAE (Hazami et al., 2022), InsGen (Yang et al., 2021), and StyleNAT (Walton et al., 2022). We used 10,000 images from each model to determine the optimal mixture. A kernel bandwidth of 40 was used for calculating 23 Published as a conference paper at ICLR 2025 ', 'original_lines': 'LDM (Rombach et al., 2022), StyleGAN-XL (Sauer et al., 2022), Efficient-vdVAE (Hazami et al., 2022), Insgen (Yang et al., 2021), and StyleNAT (Walton et al., 2023). We used 10,000 images from each model to determine the optimal mixture, resulting in the weights (0.33, 0.57, 0, 0, 0.10). A 16 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.1 OPTIMAL MIXTURE FOR DIVERSITY AND QUALITY VIA KID', 'after_section': None, 'context_after': 'the optimal mixture, resulting in weights of (0.51, 0, 0.49, 0). A kernel bandwidth of 40 was applied, and the algorithm was run for 8,000 sampling steps. The quality and diversity scores for each model, including the results for the optimal mixture based on KID, are presented in Table 2. Model LDM StyleGAN-XL StyleNAT Precision ↑ ', 'paragraph_idx': 49, 'before_section': None, 'context_before': 'score compared to the individual model with the best FID. LSUN-Bedroom We used images generated by four different models: StyleGAN (Karras et al., ', 'modified_lines': '2019), Projected GAN (Sauer et al., 2021), iDDPM (Nichol & Dhariwal, 2021), and Unleashing Transformers (Bond-Taylor et al., 2022). We utilized 10,000 images from each model to compute Truncated FFHQ. We used StyleGAN2-ADA (Karras et al., 2020) trained on FFHQ dataset to generate images. We randomly chose 8 initial points and used the Truncation Method (Marchesi, 24 Published as a conference paper at ICLR 2025 Efficient-VDVAE InsGen Optimal Mixture (KID) Mixture-UCB-CAB (KID) Mixture-UCB-OGD (KID) ', 'original_lines': '2019a), Projected GAN (Sauer et al., 2021a), iDDPM (Nichol & Dhariwal, 2021), and Unleashing Transformers (Bond-Taylor et al., 2021). We utilized 10,000 images from each model to compute 17 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 Under review as a conference paper at ICLR 2025 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 Efficient-vdVAE Insgen ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 08:44:31
ICLR.cc/2025/Conference
oaZRId1dby
TB4tRXUVWz
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'Mixture-UCB-CAB and Mixture-UCB-OGD can both be regarded as generalizations of the origi- nal UCB algorithm, in the sense that they reduce to UCB when κ(x, x′) = 0. If we remove the ˆK(x)n(t) term in (8), then Mixture-UCB-OGD becomes the same as UCB. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We present an alternative to Mixture-UCB-CAB, called the mixture upper confidence bound – online gradient descent (Mixture-UCB-OGD) algorithm, inspired by the online gradient descent algorithm (Shalev-Shwartz et al., 2012). It also has a parameter β > 1. Refer to Algorithm 2. ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}]
2025-03-02 12:59:27
ICLR.cc/2025/Conference
L2YPaYaEuT
OeyFAgTjjR
[{'section': 'Abstract', 'after_section': None, 'context_after': 'XGBoost LightGBM CatBoost ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Weather Classical ML Baselines ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'MLP SNN DCNv2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.9290 Tabular DL Models ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.9500 0.9492 0.9392 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MLP-PLR Trompt ', 'modified_lines': '', 'original_lines': 'Ensembles ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EXPERIMENTAL SETUP AND TABULAR DEEP LEARNING TECHNIQUES', 'after_section': None, 'context_after': 'MLP ens. MLP-PLR ens. ', 'paragraph_idx': 47, 'before_section': None, 'context_before': '1.5177 3.6 ± 1.5 1.5722 6.8 ± 2.0 ', 'modified_lines': 'Ensembles ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'MLP aug. MLP aug. rec. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1.4953 2.4 ± 1.5 Training Methodologies ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'TabR-S ModernNCA ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1.5160 6.0 ± 2.0 Retrieval Augmented Tabular DL ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'P L M ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1.34% 0.0% ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'P L M ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '− 0.0% ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In NIPS, 2017. 1, 2, 6 Ravin Kohli, Matthias Feurer, Katharina Eggensperger, Bernd Bischl, and Frank Hutter. Towards quantifying the effect of datasets for benchmarking: A look at tabular machine learning. In ICLR ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. In The Eleventh International Conference on Learning ', 'modified_lines': 'Representations, 2023. URL https://openreview.net/forum?id=Zb6c8A-Fghk. 16, 20 Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Bal- subramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In International conference on machine learning, pp. 5637–5664. PMLR, 2021. 14 ', 'original_lines': 'Representations, 2023. URL https://openreview.net/forum?id=Zb6c8A-Fghk. 
15, 19 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '12 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Duncan McElfresh, Sujay Khandagale, Jonathan Valverde, Ganesh Ramakrishnan, Micah Goldblum, Colin White, et al. When do neural nets outperform boosted trees on tabular data? arXiv preprint arXiv:2305.02997, 2023. 3, 4 ', 'modified_lines': '', 'original_lines': ' github MLWave. kaggle-acquire-valued-shoppers-challenge, 2014. URL https://github.com/ MLWave/kaggle_acquire-valued-shoppers-challenge. 17 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '15 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MLPs when evaluated on temporally shifted test sets (Sberbank Housing, Cooking Time, Delivery ETA, Weather, Ecom Offers and Quote Conversion datasets – most clearly seen comparing MLP-PLR with XGBoost). This might indicate that GBDTs are less robust to shifts, conversely performing ', 'modified_lines': '', 'original_lines': 'better on random splits, by possibly exploiting time-based leakage. Another notable example is TabR-S outperforming the baseline MLP (Cooking Time and Homesite Insurance) and even XGBoost (Weather). A.2 DISTRIBUTION SHIFT ROBUSTNESS METHODS We evaluate two methods that aim to mitigate the effect of distribution shift. The first one is DeepCORAL (Sun & Saenko, 2016), we adapt the method to the temporal shift setting by bucketing timestamps into different domains, similar to Wild-Time (Yao et al., 2022). The second method is Deep Feature Reweighting (DFR) (Kirichenko et al., 2023), we adapt the method by finetuning the representation of the MLP baseline on the latter instances of the train dataset. Table 5: Study of distribution shift robustness methods on TabReD. Classification (ROC AUC ↑) Methods Homesite Insurance Ecom Offers HomeCredit Default Sberbank Housing Regression (RMSE ↓) Delivery ETA Cooking Time Maps Routing Average Rank Weather MLP CORAL DFR 0.9500 0.9498 0.9499 0.6015 0.6004 0.6013 0.8545 0.8549 0.8545 0.2508 0.2645 0.2494 0.4820 0.4821 0.4819 0.5504 0.5498 0.5515 0.1622 0.1622 0.1626 1.5470 1.0 ± 0.0 1.5591 1.4 ± 0.7 1.5513 1.4 ± 0.5 Both DFR and DeepCORAL do not improve upon the MLP baseline, in line with recent work by Gardner et al. (2023); Kolesnikov (2023) for other distribution shifts. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EXPERIMENTAL SETUP AND TABULAR DEEP LEARNING TECHNIQUES', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 A.3 EXPLORATION OF TABRED DATASETS: FEATURE CORRELATIONS ', 'paragraph_idx': 44, 'before_section': None, 'context_before': '808 809 ', 'modified_lines': 'SberbankHousingEcomOffersMapsRoutingHomesiteInsuranceCookingTimeHomeCreditDefaultDeliveryETAWeather10−310−210−1Labeldistanced(Ytrain,Ytest)TemporalSplitRandomSplitSberbankHousingEcomOffersMapsRoutingHomesiteInsuranceCookingTimeHomeCreditDefaultDeliveryETAWeather10−210−1Top20FeaturesDistanced(Xtrain,Xtest)TemporalSplitRandomSplit0.240.260.280.30RMSE↓SberbankHousingTemporalSplitRandomSplit0.4600.4650.4700.4750.480CookingTimeTemporalSplitRandomSplit0.5300.5350.5400.5450.5500.555DeliveryETATemporalSplitRandomSplit0.1610.1620.1630.164MapsRoutingTemporalSplitRandomSplit1.351.401.451.501.55WeatherTemporalSplitRandomSplit0.9500.9550.9600.965AUC-ROC↑HomesiteInsuranceTemporalSplitRandomSplit0.5750.6000.6250.6500.6750.700EcomOffersTemporalSplitRandomSplit0.8450.8500.8550.8600.865HomeCreditDefaultMLPMLP(PLR)XGBoostTabR better on random splits, by possibly exploiting time-based leakage. Another notable example is TabR-S outperforming the baseline MLP (Cooking Time and Homesite Insurance) and even XGBoost (Weather). A.2 DISTRIBUTION SHIFT ROBUSTNESS METHODS We evaluate two methods that aim to mitigate the effect of distribution shift. The first one is DeepCORAL (Sun & Saenko, 2016), we adapt the method to the temporal shift setting by bucketing timestamps into different domains, similar to Wild-Time (Yao et al., 2022). The second method is Deep Feature Reweighting (DFR) (Kirichenko et al., 2023), we adapt the method by finetuning the representation of the MLP baseline on the latter instances of the train dataset. Table 5: Study of distribution shift robustness methods on TabReD. Classification (ROC AUC ↑) Methods Homesite Insurance Ecom Offers HomeCredit Default Sberbank Housing Regression (RMSE ↓) Delivery ETA Cooking Time Maps Routing Average Rank Weather MLP CORAL DFR 0.9500 0.9498 0.9499 0.6015 0.6004 0.6013 0.8545 0.8549 0.8545 0.2508 0.2645 0.2494 0.4820 0.4821 0.4819 0.5504 0.5498 0.5515 0.1622 0.1622 0.1626 1.5470 1.0 ± 0.0 1.5591 1.4 ± 0.7 1.5513 1.4 ± 0.5 Both DFR and DeepCORAL do not improve upon the MLP baseline, in line with recent work by Gardner et al. (2023); Kolesnikov (2023) for other distribution shifts. 
', 'original_lines': 'TemporalSplitRandomSplit0.240.260.280.30RMSE↓SberbankHousingTemporalSplitRandomSplit0.4600.4650.4700.4750.480CookingTimeTemporalSplitRandomSplit0.5300.5350.5400.5450.5500.555DeliveryETATemporalSplitRandomSplit0.1610.1620.1630.164MapsRoutingTemporalSplitRandomSplit1.351.401.451.501.55WeatherTemporalSplitRandomSplit0.9500.9550.9600.965AUC-ROC↑HomesiteInsuranceTemporalSplitRandomSplit0.5750.6000.6250.6500.6750.700EcomOffersTemporalSplitRandomSplit0.8450.8500.8550.8600.865HomeCreditDefaultMLPMLP(PLR)XGBoostTabR ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 864 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '862 863 ', 'modified_lines': ' ', 'original_lines': '0.00.1Churn0.00.10.000.25CaliforniaHousing0.000.250.000.25House16H0.000.250.00.1Adult0.00.101Diamond010.000.25Otto0.000.250.000.05HiggsSmall0.000.0501BlackFriday010.000.25Covertype0.000.250.000.05Microsoft0.000.050.00.5SberbankHousing0.00.50.0000.025E-CommerceOffers0.0000.0250.00.5MapsRouting0.00.50.000.05HomesiteInsurance0.000.050.000.25CookingTime0.000.250.000.01HomecreditDefault0.000.010.000.25DeliveryETA0.000.250.00.5Weather0.00.5−1.00−0.75−0.50−0.250.000.250.500.751.00FeatureCorrelation ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Comments: One of the most popular tabular datasets, Adult was created by Barry Becker based on the 1994 Census database. The target variable is a binary indicator of whether a person has a yearly income above 50000$. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Year: 1994 ', 'modified_lines': '', 'original_lines': '20 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Comments: No time split, predicting customer’s purchase amount from demographic features. ”A retail company “ABC Private Limited” wants to understand the customer purchase behaviour ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '#Features: 9 Year: 2019 ', 'modified_lines': '', 'original_lines': ' 21 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 Under review as a conference paper at ICLR 2025 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'FACEBOOK COMMENTS VOLUME Tags: Leak, Tabular, Timesplit Needed ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comments: This dataset comes from a 2008 competition ”Large Scale Learning Challenge” by the K4all foundation. 
The source of the data is unclear, the dataset might be synthetic. ', 'modified_lines': '', 'original_lines': '22 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '#Samples: 5032 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'KDDCUP09 UPSELLING Tags: Tabular, Timesplit Needed ', 'modified_lines': '', 'original_lines': ' 23 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'SGEMM GPU KERNEL PERFORMANCE Tags: Leak, Tabular ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'are the events? what if the distribution of these counts shifts over time?). No canonical split available, no details on the competition website on the nature of the features ', 'modified_lines': '', 'original_lines': '24 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Tags: Tabular, Timesplit Needed ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'TabR achieve large performance improvements. SPEEDDATING ', 'modified_lines': '', 'original_lines': ' Tags: Tabular, Timesplit Needed #Samples: 8378 #Features: 121 Year: 2004 Comments: This dataset describes experimental speed dating events that took place from 2002 to 2004. The data describes the responses of participants to a questionnaire, and the target variable is whether they matched or not. TABLESHIFT ASSISTMENTS Tags: Tabular, Timesplit Needed #Samples: 2600000 #Features: 16 Year: 2013 Comments: Predict whether the student answers correctly. Features include: student-, problem-, and school-level features, the dataset also contains affect predictions for students based on an experimental affect detector implemented in ASSISTments. Timesplit is not possible. TABLESHIFT CHILDHOOD LEAD Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 27000 #Features: 8 Year: 2023 Comments: The data comes from CDC National Health and Nutrition Examination Survey, and the task in this dataset is to predict whether a person has high blood lead levels based on answers to a questionnaire. TABLESHIFT COLEGE SCORECARD ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'TABLESHIFT INCOME ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comments: The data comes from MIMIC-III, describing records from Beth Israel Deaconess Medical Center. The data used in this dataset would be more effectively processed as time series and sequences. 
', 'modified_lines': '', 'original_lines': ' 26 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '27 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'TABLESHIFT SEPSIS Tags: Semi- Tabular, Timesplit Needed ', 'modified_lines': '', 'original_lines': ' #Samples: 1500000 #Features: 41 Year: 2019 Comments: Predict whether a person will develop sepsis in the next 6 months based on the data about their health, including questionnaire answers and patient records. TABLESHIFT UNEMPLOYMENT Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 1700000 #Features: 18 Year: 2018 Comments: The task is to predict whether a person is unemployed based on their answers to a survey. Data is provided by American Community Survey. TABLESHIFT VOTING Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 8000 #Features: 55 Year: 2020 Comments: The prediction target for this dataset is to determine whether an individual will vote in the U.S presidential election, from a detailed questionnaire. It seems like the data goes all the way back to 1948, which makes this not realistic when not using time split VESSEL POWER R Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 554642 #Features: 10 Year: 2022 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ANALCATDATA SUPREME Tags: Tabular, Timesplit Needed, Timesplit Possible ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the original data could be of any modality. There is no way to control a train / test split without task details. ', 'modified_lines': '', 'original_lines': '28 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'CNAE-9 Tags: HomE ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comments: Dataset describes 17 marketing campaigns by a bank from 2008 to 2010. A set of features is not very rich, but reasonable (ideally there would be more user features and statistics). ', 'modified_lines': '', 'original_lines': '29 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 Under review as a conference paper at ICLR 2025 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '#Samples: 690 #Features: 16 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Tags: Tabular, Timesplit Needed ', 'modified_lines': '', 'original_lines': '30 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'time spent in the player’s club, as well as the price in release clause. 
This dataset does not correspond to any real-world task, and the provided features are very shallow, as they luck any information about a player’s performance in previous games ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comments: This dataset contains information about FIFA soccer players in 2021, and the target variable is their wages. The provided features include age, weight, height, and information about ', 'modified_lines': '', 'original_lines': ' 31 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 Under review as a conference paper at ICLR 2025 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'JUNGLE-CHESS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comments: The data was provided by AutoML challenge, and the dataset was created from objects from another domain, such as text, audio, or video, compressed into tabular form. ', 'modified_lines': '', 'original_lines': ' 32 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '33 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'to classify lymph in one of four categories. Unfortunately, the dataset contains only 2 samples with normal lymph, making it hard for the dataset to be used for training a real-world model categorizing lymph. ', 'modified_lines': '', 'original_lines': ' MEDICAL CHARGES Tags: Tabular, Timesplit Needed #Samples: 163065 #Features: 3 Year: 2019 Comments: Public medicare data from 2019. According to openml analysis, only one of the features is important for prediction. MFEAT-FOURIER Tags: HomE #Samples: 2000 #Features: 77 Year: 1998 Comments: One of a set of 6 datasets describing features of handwritten numerals (0 - 9) extracted from a collection of Dutch utility maps. MFEAT-ZERNIKE Tags: HomE #Samples: 2000 #Features: 48 Year: 1998 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 1836 1837 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1834 1835 ', 'modified_lines': '', 'original_lines': 'Comments: One of a set of 6 datasets describing features of handwritten numerals (0 - 9) extracted from a collection of Dutch utility maps. MONKS-PROBLEMS-2 Tags: Synthetic or Untraceable #Samples: 601 #Features: 7 Year: 1992 Comments: Simple toy synthetic, the task of determining whether there are exactly two ones among the 6 binary variables. NOMAO Tags: Tabular #Samples: 34465 #Features: 119 Year: 2013 Comments: Active learning dataset, the task is determining whether two geo-location points are the same. Hand-labeled by an expert of Nomao. 
NYC-TAXI-GREEN-DEC-2016 Tags: Tabular, Timesplit Needed #Samples: 581835 #Features: 9 Year: 2016 Comments: The data was provided by the New York City Taxi and Limousine Commission, and the task is to predict tip amount based on simple features describing the trip. PARTICULATE-MATTER-UKAIR-2017 Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 394299 #Features: 6 Year: 2017 Comments: Hourly particulate matter air pollution data of Great Britain for the year 2017. Time features available, prior work uses random split. There are only 6 features, describing time and location. This is a time-series forecasting problem (2 features from the original dataset missing). This is more likely a time-series problem, as there are not many heterogeneous features related to the task, only time-based features PHONEME Tags: HomE #Samples: 5404 #Features: 6 Year: 1993 Comments: The dataset describes a collection of phonemes and presents a task of classifying between nasal and oral sounds. The phonemes are transcribed as follows: sh as in she, dcl as in dark, iy as the vowel in she, aa as the vowel in dark, and ao as the first vowel in water., DL in audio outperforms shallow methods, when applied to raw data. Here we only have 5 features extracted from the raw data (its audio) POKER-HAND Tags: Synthetic or Untraceable, Tabular #Samples: 1025009 #Features: 9 Year: 2007 Comments: A task of classifying a poker hand based on it’s content. One line non-ML solution exists, does not correspond to a real-world ML problem. 34 POL Tags: Tabular, Timesplit Needed #Samples: 10082 #Features: 26 Year: 1995 Comments: The data describes a telecommunication problem, no further information is available. PROFB Tags: Tabular, Timesplit Needed #Samples: 672 #Features: 10 Year: 1992 Comments: Dataset describing professional football games. The task is to predict whether the favoured team was playing home. QSAR-BIODEG Tags: HetE #Samples: 155 #Features: 42 Year: 2013 Comments: The QSAR biodegradation dataset was built by the Milano Chemometrics and QSAR Research Group. Nowadays, a different approach based on graph neural networks is taken towards the task of predicting the characteristics of molecules, which is why this is not really a realistic use-case for tabular DL RL Tags: Synthetic or Untraceable, Tabular #Samples: 4970 #Features: 12 Year: 2018 Comments: Unknown real-life problem. Small, not many features, No canonical split. Retrieval methods such as TabR achieve large performance gains, which could signal that there is leakage in the data. ROAD-SAFETY Tags: Tabular, Timesplit Needed, Timesplit Possible #Samples: 111762 #Features: 32 Year: 2015 Comments: The data describes road accidents in Great Britain from 1979 to 2015. The task is to predict sex of a driver based on information about an accident. Retrieval methods such as TabR achieve large performance gains, which could signal that there is leakage in the data. SOCMOB Tags: Tabular, Timesplit Needed #Samples: 1156 #Features: 6 Year: 1973 Comments: An instance represents the number of sons that have a certain job A given the father has the job B (additionally conditioned on race and family structure). Just statistic data, not a real task SPLICE Tags: Raw #Samples: 3190 #Features: 61 Year: 1992 Comments: The task is to classify parts of genom as splice regions. The features are just a subsequence of DNA, more of an NLP task 35 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 11:10:09
ICLR.cc/2025/Conference
OeyFAgTjjR
kmymIYnUY8
[{'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'find data-leakage issues and non-tabular, synthetic and anonymous/unknown datasets sometimes “sneaking” through automatic filters. Furthermore, these datasets do not represent conditions of temporal shift and extensive feature engineering that are common in practical applications. ', 'paragraph_idx': 13, 'before_section': '2 RELATED WORK', 'context_before': 'Tabzilla (McElfresh et al., 2023) and the Grinsztajn et al. (2022) benchmark have gained adoption in the research community. For example, such papers as Gorishniy et al. (2024), Chen et al. (2023b), ', 'modified_lines': 'Feuer et al. (2024), evaluate performance on these benchmarks. Both benchmarks primarily rely on the OpenML repository as a source of datasets, and filter datasets semi-automatically based on metadata like size and baseline performance. In our work, we look closer at all the datasets and ', 'original_lines': 'Feuer et al. (2024), demonstrate their performance on these benchmarks. Both benchmarks primarily rely on the OpenML repository as a source of datasets, and filter datasets semi-automatically based on metadata like size and baseline performance. In our work, we look closer at all the datasets and ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 13}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The field of benchmarks for time-series focuses on prediction of target variables in the future, as does our benchmark. Works such as Shchur et al. (2023) and Ansari et al. (2024) both contain ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'TableShift (Gardner et al., 2023) and WildTab (Kolesnikov, 2023) propose tabular benchmarks with distribution shifts between train/test subsets. These benchmarks are closer in spirit to TabReD, as both describe evaluation in unrepresented conditions. However, both benchmarks focus on out-of- ', 'modified_lines': 'distribution robustness and provide domain generalization methods comparison. We study a broader set of methods, including recent SoTA tabular neural networks. Furthermore, both benchmarks consider more “extreme” shifts, compared to the more ubiquitous gradual temporal shift which is present in all TabReD datasets. ', 'original_lines': 'distribution robustness and domain generalization methods comparison, not how many tabular-data specific methods perform in the new setting. We study a broader set of methods, including recent SoTA tabular neural networks. Furthermore, both benchmarks consider more “extreme” shifts, compared to the more ubiquitous gradual temporal shift which is present in all TabReD datasets. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Data Leakage, Synthetic and Non-Tabular Datasets. First, a considerable number of tabular datasets have some form of data leakage (11 out of 100). Leakage stems from data preparation ', 'paragraph_idx': 3, 'before_section': None, 'context_before': '✗ ✓ ', 'modified_lines': ' represents, it is unclear how transferable are advances on these datasets. The third issue is the usage of the data that belongs to other domains, e.g. image data flattened into an array of values. While such datasets correspond to a valid and useful task, it is unclear how useful are advances on such datasets in practice, since other domain-specific methods usually perform significantly better for this type of data. 
Table 1 summarizes our analysis of 100 unique classification and regression datasets from academic benchmarks (Gorishniy et al., 2022; Grinsztajn et al., 2022; Gorishniy et al., 2024; McElfresh et al., 2023; Chen et al., 2023b; Gardner et al., 2023). We also provide detailed meta-data collected in the process with short descriptions of tasks, original data sources, data quality issues and notes on temporal splits in the Appendix F. Our main findings are as follows. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.1 ON THE ROLE AND LIMITATIONS OF TABRED We see the TabReD benchmark as an important addition to the landscape of tabular datasets. While ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'subsection A.3. We also provide the table with our annotations of Kaggle competition, used to filter datasets from Kaggle in Appendix D. ', 'modified_lines': '', 'original_lines': 'Table 2: Short description of datasets in TabReD. Numbers in parentheses denote full data size. We use random subsets of large datasets to make extensive hyperparameter tuning feasible. Dataset # Samples # Features Source Task Description Sberbank Housing 20K Homesite Insurance 224K Ecom Offers 106K HomeCredit Default 381K (1.5M) Cooking Time Delivery ETA Maps Routing Weather 387 296 119 696 228K (10.6M) 195 225 224K (6.9M) 1026 192K (8.8M) 98 605K (6.0M) Kaggle Real estate price prediction Kaggle Insurance plan acceptance prediction Kaggle Predict whether a user will redeem an offers Kaggle Loan default prediction New New New New Weather prediction (temperature) Restaurant order cooking time estimation Grocery delivery courier ETA prediction Navigation app ETA from live road-graph features ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 ON THE ROLE AND LIMITATIONS OF TABRED', 'after_section': None, 'context_after': '5 5 HOW DO TABULAR DL TECHNIQUES TRANSFER TO TABRED CONDITIONS? ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'may limit some potential future applications of these datasets, like leveraging feature names and descriptions with LLMs. ', 'modified_lines': 'Table 2: Short description of datasets in TabReD. Numbers in parentheses denote full dataset sizes. We use random subsets of large datasets to make extensive hyperparameter tuning feasible. 
Dataset # Samples # Features Source Task Description 28K Sberbank Housing Ecom Offers 160K Homesite Insurance 260K HomeCredit Default 381K (1.5M) Cooking Time Delivery ETA Maps Routing Weather 392 119 299 696 319K (12.8M) 192 350K (17.0M) 223 279K (13.6M) 986 423K (16.9M) 103 Kaggle Real estate price prediction Kaggle Predict whether a user will redeem an offers Kaggle Insurance plan acceptance prediction Kaggle Loan default prediction New New New New Weather prediction (temperature) Restaurant order cooking time estimation Grocery delivery courier ETA prediction Navigation app ETA from live road-graph features Published as a conference paper at ICLR 2025 ', 'original_lines': 'Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'REFERENCES ', 'paragraph_idx': 67, 'before_section': '7 CONCLUSION', 'context_before': 'REPRODUCIBILITY STATEMENT ', 'modified_lines': 'We describe our experimental setup in subsection 5.1 and Appendix C. The code is available at https://github.com/yandex-research/tabred • Dataset downloading and preprocessing is handled by the provided code. Look for the preprocessing folder in the repository. • Newly introduced datasets are avaialable at https://kaggle.com/TabReD • Further instructions to reproduce experiments and plots are in the README.md ', 'original_lines': 'We describe our experimental setup in subsection 5.1 and Appendix C. We also provide code with the submission. Dataset downloading and preprocessing is handled by the provided code (see the ./preprocessing folder. Newly introduced datasets would be available upon acceptance and are not available for anonymity. Instructions to reproduce experiments and plots are in the provided code README.md ', 'after_paragraph_idx': None, 'before_paragraph_idx': 67}]
2025-02-27 14:45:58
ICLR.cc/2025/Conference
9ZaxejqXHw
u4tmiVMxtn
[{'section': 'Abstract', 'after_section': None, 'context_after': 'G ≥ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:88) ', 'modified_lines': '', 'original_lines': 'ρk ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 CONVERGENCE RESULT FOR FULL-BATCH ADAFACTOR', 'after_section': None, 'context_after': '(cid:88) ', 'paragraph_idx': 28, 'before_section': None, 'context_before': '(cid:13) (cid:13) (cid:13) ', 'modified_lines': ' ρk ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-18 03:37:24
ICLR.cc/2025/Conference
u4tmiVMxtn
qJ0YEr6IXq
[{'section': '4 A REVIEW OF ADAFACTOR', 'after_section': '4 A REVIEW OF ADAFACTOR', 'context_after': 'where β1,k, β2,k ∈ (0, 1), thereby tripling the memory usage. The innovation in Adafactor lies in its method of approximating Vk by factoring it into two rank-1 matrices, specifically the row sums and column sums of Vk, thus sufficiently reducing the memory from 2mn to m + n. Although this factorization sacrifices some information about the squared gradients, Adafactor still delivers performance comparable to Adam in many real application tasks, making it a practical choice where memory is a constraint. In Adam, corrective terms are introduced into Mk and Vk, resulting in Increasing decay rate. ', 'paragraph_idx': 24, 'before_section': '4 A REVIEW OF ADAFACTOR', 'context_before': 'average update, Mk = β1,kMk−1 + (1 − β1,k)Gk, Vk = β2,kVk−1 + (1 − β2,k)Gk ⊙ Gk, ', 'modified_lines': ' (2) ', 'original_lines': ' (2) ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': '4 A REVIEW OF ADAFACTOR', 'after_section': '4 A REVIEW OF ADAFACTOR', 'context_after': 'Relative step-sizes. Adafactor incorporates a step-size proportional to scale of Xk, denoted by RMS(Xk), which is shown in experiments more resilient to the more naive parameter initialization ', 'paragraph_idx': 25, 'before_section': '4 A REVIEW OF ADAFACTOR', 'context_before': 'Mk and instead applies an update clipping technique inside the step-size ηk. This involves dividing the root-mean-square of the update Uk, denoted as RMS(Uk), when it exceeds a threshold d. This mechanism helps to calibrate the second moment estimator Wk when it’s larger-than-desired Gk⊙Gk. ', 'modified_lines': 'Empirical findings in (Shazeer & Stern, 2018) indicated that implementing update clipping leads to significant performance improvements when the warm-up technique is not used. We note that it differs from the standard clipping and it remains unknown whether it’s also needed in the heavy-tail case (Gorbunov et al., 2020) as the standard one. ', 'original_lines': 'Empirical findings in (Shazeer & Stern, 2018) indicated that implementing update clipping leads to significant performance improvements when the warm-up technique is not used. ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 25}, {'section': '9 EXPERIMENTS', 'after_section': '9 EXPERIMENTS', 'context_after': '10 ', 'paragraph_idx': 68, 'before_section': '9 EXPERIMENTS', 'context_before': 'Limitations. Several limitations warrant further investigation. First, the polynomial dependency on ϵ1 in convergence bounds may be improved to a better one, such as log(1/ϵ1). Second, the convergence bound for stochastic vanilla Adafactor remains unknown. Third, the bounded stochastic ', 'modified_lines': 'gradient can be relaxed as it may be unpractical in LLMs (Zhang et al., 2020). Finally, it’s beneficial to further support our theoretical results through experiments on large language models. ', 'original_lines': 'gradient can be relaxed to some more realistic assumptions. Finally, it’s beneficial to further support our theoretical results through experiments on large language models. ', 'after_paragraph_idx': 68, 'before_paragraph_idx': 68}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Bingrui Li, Jianfei Chen, and Jun Zhu. Memory efficient optimizers with 4-bit states. Advances in Neural Information Processing Systems, 36, 2024. 
', 'modified_lines': '', 'original_lines': ' Haochuan Li, Ali Jadbabaie, and Alexander Rakhlin. Convergence of Adam under relaxed assump- tions. In Advances in Neural Information Processing Systems, 2023. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Fangyu Zou, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. A sufficient condition for conver- gences of Adam and RMSProp. In Proceedings of the IEEE Conference on Computer Vision and ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Dongruo Zhou, Jinghui Chen, Yuan Cao, Yiqi Tang, Ziyan Yang, and Quanquan Gu. On the convergence of adaptive gradient methods for nonconvex optimization. In Annual Workshop on Optimization for Machine Learning, 2020. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-22 02:37:22
ICLR.cc/2025/Conference
qJ0YEr6IXq
eCikScdERB
[{'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 ', 'paragraph_idx': 13, 'before_section': '2 RELATED WORK', 'context_before': 'the decay rate of the second moment is close to one. Several works (Chen et al., 2019; Zhou et al., 2020; Alacaoglu et al., 2020) provide convergence bounds for AMSGrad in non-convex smooth settings. A line of research, e.g., (Zaheer et al., 2018; De et al., 2018; Zou et al., 2019; Défossez et al., ', 'modified_lines': '2022) have investigated the convergence of Adam assuming bounded gradients and noise. Yao et al. (2021) designed AdaHessian, using Hutchinson’s approximation to estimate the diagonal Hessian. ', 'original_lines': '2022) have investigated the convergence of Adam assuming bounded gradients. These mentioned results successfully derive a convergence rate of ˜O(1/ T ), matching the lower bound as shown in (Arjevani et al., 2023). √ ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': 'Abstract', 'after_section': None, 'context_after': '3 108 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• (A4) Almost surely bounded stochastic gradient: for any X ∈ Rn×m, ∥g(X, Z)∥F ≤ G, a.s.. ', 'modified_lines': '', 'original_lines': 'Combining with (A3) and (A4), it’s easy to verify that ∥∇f (X)∥ ≤ G, ∀X ∈ Rn×m. Assumptions (A1)-(A3) are standard in the non-convex smooth convergence analysis. Although Assumption (A4) is a bit strong, it’s still commonly used to derive the high probability convergence bound, see e.g., (Ward et al., 2020; Kavis et al., 2022), which is a stronger result than an expected convergence.It’s ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 PROBLEM SETUP', 'after_section': '3 PROBLEM SETUP', 'context_after': 'also commonly appeared in several early convergence results for adaptive methods, e.g., (Kingma & Ba, 2015; Reddi et al., 2018; Zaheer et al., 2018; Défossez et al., 2022). We note that our analysis can be extended to the sub-Gaussian noise case, which is commonly used for analyzing adaptive ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'Combining with (A3) and (A4), it’s easy to verify that ∥∇f (X)∥ ≤ G, ∀X ∈ Rn×m. Assumptions (A1)-(A3) are standard in the non-convex smooth convergence analysis. Although Assumption (A4) is a bit strong, it’s still commonly used to derive the high probability convergence bound, see e.g., (Ward et al., 2020; Kavis et al., 2022), which is a stronger result than an expected convergence.It’s ', 'original_lines': '', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}, {'section': '4 A REVIEW OF ADAFACTOR', 'after_section': '4 A REVIEW OF ADAFACTOR', 'context_after': 'Relative step-sizes. Adafactor incorporates a step-size proportional to scale of Xk, denoted by RMS(Xk), which is shown in experiments more resilient to the more naive parameter initialization ', 'paragraph_idx': 27, 'before_section': '4 A REVIEW OF ADAFACTOR', 'context_before': 'Mk and instead applies an update clipping technique inside the step-size ηk. This involves dividing the root-mean-square of the update Uk, denoted as RMS(Uk), when it exceeds a threshold d. This mechanism helps to calibrate the second moment estimator Wk when it’s larger-than-desired Gk⊙Gk. ', 'modified_lines': 'Empirical findings in (Shazeer & Stern, 2018) indicated that implementing update clipping leads to significant performance improvements when the warm-up technique is not used. 
', 'original_lines': 'Empirical findings in (Shazeer & Stern, 2018) indicated that implementing update clipping leads to significant performance improvements when the warm-up technique is not used. We note that it differs from the standard clipping and it remains unknown whether it’s also needed in the heavy-tail case (Gorbunov et al., 2020) as the standard one. ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 27}, {'section': '7 CONVERGENCE OF ADAFACTOR WITH UPDATE CLIPPING', 'after_section': None, 'context_after': '1 1 The time-increasing dk provides the following intuition: As shown in (Shazeer & Stern, 2018, Figure 8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES ', 'paragraph_idx': 43, 'before_section': '7 CONVERGENCE OF ADAFACTOR WITH UPDATE CLIPPING', 'context_before': '1 ', 'modified_lines': 'The additional hyper-parameter α primarily influences the dependency on ϵ1, specifically as log(1/ϵ1)(cid:1). Thus, our convergence bound may deteriorate as α increases. This depen- O (cid:0)ϵ−α dency could be potentially improved to O (cid:0)ϵ−1 log(1/ϵ1)(cid:1) when mn is comparable to 1/ϵ1, which is practical in large-size models.3 In our experiments, we found that suitably small values, such as α = 4, 6, 7, 8 can lead to convergence speed and training stability comparable to the default one (see Figure 5 and 6). This finding suggests that our new threshold setting plays a similar role in enhancing training stability as the default one, which is also the main motivation for update clipping. Since ϵ1 can be set to a relatively large value, e.g., 10−3, a dependency like O(ϵ−4 log(1/ϵ1)) is somewhat acceptable for sufficiently large T . 1), during the early stages of training, a high decay rate β2,k can cause larger-than-desired updates and training instability. Therefore, we set a low threshold dk to ensure that the update clipping mechanism effectively calibrates these larger-than-desired updates. As training progresses, the sequences and updates become more stable. Consequently, there is less need for update clipping, corresponding to a relatively large dk. ', 'original_lines': 'The additional hyper-parameter α primarily influences the dependency on ϵ1, specifically as O (cid:0)ϵ−α log(1/ϵ1)(cid:1). Thus, our convergence bound may deteriorate as α increases, possibly due to the limitation of our proof framework. This dependency could be potentially improved to log(1/ϵ1)(cid:1) when mn is comparable to 1/ϵ1, which is practical in large-size models.3 In our O (cid:0)ϵ−1 experiments, we found that suitably small values, such as α = 4, 6, 7, 8 can lead to convergence speed and training stability comparable to the default one without implementing the warm-up technique (see Figure 5 and 6). This finding suggests that our new threshold setting plays a similar role in enhancing training stability as the default one, which is also the main motivation for update clipping. Since ϵ1 can be set to a relatively large value, e.g., 10−3, a dependency like O(ϵ−4 log(1/ϵ1)) is somewhat acceptable for sufficiently large T . 1), during the early stages of training, a high decay rate β2,k can cause larger-than-desired updates and training instability. Therefore, we set a low threshold dk to ensure that the update clipping mechanism effectively calibrates these larger-than-desired updates. As training progresses, the sequences and updates become more stable, and the second moment estimator Wk becomes more accurate in estimating the squared gradients, which is also shown in (Shazeer & Stern, 2018, Figure 1). 
Consequently, there is less need for update clipping, corresponding to a relatively large dk. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 42}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'A key difficulty in analyzing adaptive methods lies in computing the conditional expectation of (I) due to the correlation of Gk and Wk. To overcome this, existing analyses typically introduce a proxy ', 'modified_lines': 'step-size matrix Ak that is conditional independent of Gk. This approach is applied in works such as 3The detailed calculation could be found in (96) from the appendix. ', 'original_lines': ' 3The detailed calculation could be found in (92) from the appendix. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(Ward et al., 2020; Défossez et al., 2022) for AdaGrad and (Wang et al., 2023; Hong & Lin, 2024) for Adam. Introducing Ak into (8) and summing up both sides over k ∈ [t], t ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'step-size matrix Ak that is conditional independent of Gk. This approach is applied in works such as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'after_section': None, 'context_after': 'ij − a(k) (cid:12) (cid:113) a(k) ', 'paragraph_idx': 49, 'before_section': None, 'context_before': '(cid:12) (cid:12) ', 'modified_lines': '(cid:12)w(k) ', 'original_lines': '(cid:12)w(k) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'after_section': None, 'context_after': 'Gk√ Wk Gk√ Vk ', 'paragraph_idx': 50, 'before_section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'context_before': 'inequality for Adam with a constant decay rate (Défossez et al., 2022, Lemma 5.2) to a time-varying setup. These results are summarized as (see the details in Lemma B.4 and B.5), (cid:13) ', 'modified_lines': '(cid:13) (cid:13) (cid:13) (1 − β2,k) (1 − β2,k) ', 'original_lines': '(cid:13) (cid:13) (cid:13) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 50}, {'section': 'Abstract', 'after_section': None, 'context_after': 'G2 ϵ1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Gk√ Vk ', 'modified_lines': '', 'original_lines': '(1 − β2,k) (1 − β2,k) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '12 k=1', 'after_section': '12 k=1', 'context_after': '(3) ≲ O ', 'paragraph_idx': 58, 'before_section': '12 k=1', 'context_before': 'Here, (2) is a summation of a martingale difference sequence and (1) is an error term that can be estimated similarly to (B) in (9). The critical step is to handle the additional error term (3) using the ', 'modified_lines': 'maximum operator inside the update clipping (detailed in (109) and (110)), ', 'original_lines': 'maximum operator inside the update clipping (detailed in (105) and (106)), ', 'after_paragraph_idx': 58, 'before_paragraph_idx': 58}, {'section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'after_section': None, 'context_after': 'Solution. We first separate [t] into two index set E1 = (cid:8)k ∈ [t] | ∥Uk∥F ≥ d ', 'paragraph_idx': 46, 'before_section': None, 'context_before': 'c 2(α−1) . ', 'modified_lines': 'Challenge III. 
Lower bound first-order term (full-batch case). A central problem in full-batch case is to lower bound (I) in (15). Existing results on Adam, e.g., (De et al., 2018) obtain that ∥Vk∥∞ ≤ G2 based on exponential moving average property, thus lower bounding (I). However, Adafactor does not enjoy such a property. ', 'original_lines': 'Challenge III. Lower bound first-order term (full-batch case). A central problem in the proof of Theorem 5.1 is to lower bound (I) in (15). Existing results on Adam, e.g., De et al. (2018) obtain that ∥Vk∥∞ ≤ G2 based on exponential moving average property, thus lower bounding (I). However, Adafactor does not enjoy such a property. In addition, we should consider the effect of the update clipping. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Diederik P Kingma and Jimmy Ba. Adam: a method for stochastic optimization. In International Conference on Learning Representations, 2015. ', 'modified_lines': '', 'original_lines': ' Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Bingrui Li, Jianfei Chen, and Jun Zhu. Memory efficient optimizers with 4-bit states. Advances in Neural Information Processing Systems, 36, 2024. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Pengxiang Zhao, Ping Li, Yingjie Gu, Yi Zheng, Stephan Ludger Kölker, Zhefeng Wang, and Xiaoming Yuan. Adapprox: Adaptive approximation in adam optimization via randomized low- rank matrices. arXiv preprint arXiv:2403.14958, 2024. 
', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:12) (cid:12) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ij ˜g(k) ij | ', 'modified_lines': '', 'original_lines': ' (cid:113) 1 a(k) ij − 1 w(k) ij (cid:114)(cid:12) (cid:12)w(k) (cid:12) ij − a(k) ij ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '+ 4 t ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ˆρk · ', 'modified_lines': '', 'original_lines': 'Using Lemma B.7 and (100), we further derive that ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'after_section': None, 'context_after': '¯Gk √ Ak 4 (cid:13) 2 (cid:13) ', 'paragraph_idx': 50, 'before_section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'context_before': '(cid:13) (cid:13) ', 'modified_lines': '(cid:13) (cid:13) (cid:13) (cid:13) ¯Gk √ Ak 4 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 50}, {'section': '8 SUMMARY OF PROOF CHALLENGES AND TECHNIQUES', 'after_section': None, 'context_after': '(cid:112)1 − β2,k ', 'paragraph_idx': 47, 'before_section': None, 'context_before': 'k=1 t (cid:88) ', 'modified_lines': ' k=1 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '32 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'F . ', 'modified_lines': '', 'original_lines': ' k=1 k=1 Using (94), Lemma B.4 and Lemma B.5, we further have (cid:13) 2 (cid:13) (cid:13) (cid:13) G(ϵ2 + Θmax)ρ0 ¯Gk √ Ak D.1 ≤ t (cid:88) t (cid:88) (cid:13) (cid:13) (cid:13) (cid:13) + 4 1 4 ˆρk √ 4 F k=1 k=1 t (cid:88) k=1 ≤ 1 4 ˆρk (cid:13) (cid:13) (cid:13) (cid:13) ¯Gk √ Ak 4 (cid:13) 2 (cid:13) (cid:13) (cid:13) F + 8mnG 3 2 (ϵ2 + Θmax)ρ0 max{m, n}ϵ1 31 (1 − β2,k) (cid:34) (cid:18) log 2 + (cid:13) (cid:13) (cid:13) (cid:13) Gk√ Wk (cid:13) 2 (cid:13) (cid:13) (cid:13) F (cid:19) 2G2 ϵ1 t (cid:88) + 4 (cid:35) (1 − β2,k) . k=1 (101) 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 Under review as a conference paper at ICLR 2025 To avoid the curse of dimension, we apply Lemma B.7, (94) and (78) to derive that D.1 ≤ ≤ ≤ 1 4 1 4 1 4 t (cid:88) k=1 t (cid:88) k=1 t (cid:88) k=1 ˆρk ˆρk ˆρk (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) (cid:13) ¯Gk √ Ak 4 ¯Gk √ Ak 4 ¯Gk √ Ak 4 (cid:13) 2 (cid:13) (cid:13) (cid:13) F (cid:13) 2 (cid:13) (cid:13) (cid:13) F (cid:13) 2 (cid:13) (cid:13) (cid:13) F + 4(G1 + G2) t (cid:88) k=1 (cid:112)1 − β2,k ˆρk (cid:13) (cid:13) (cid:13) (cid:13) Gk√ Wk (cid:13) 2 (cid:13) (cid:13) (cid:13) F + 4(G1 + G2)(ϵ2 + Θmax)ρ0 t (cid:88) k=1 1 kc/2+1/2 (cid:13) (cid:13) (cid:13) (cid:13) Gk√ Wk (cid:13) 2 (cid:13) (cid:13) (cid:13) F + 4G3(G1 + G2)(ϵ2 + Θmax)ρ0 t (cid:88) k=1 1 kc/2+1/2 . 
(102) Estimating D.2 Since Ak is independent from Zk, it further leads to (cid:105)(cid:29) (cid:28) ¯Gk√ Ak Then, the deduction for estimating D.2 follows the similar idea as in Lemma B.6, relying on a martingale difference sequence. , ˜Gk − EZk D.2 = − (cid:104) ˜Gk t (cid:88) ˆρk k=1 . (cid:68) ¯Gk√ Ak , ˜Gk − EZk (cid:105)(cid:69) (cid:104) ˜Gk Let us set φk = −ˆρk and the filtration Fk = σ (Z1, · · · , Zk). Noting that ˆρk, ¯Gk and Ak are dependent by Fk−1. Since ξk is dependent by Fk, we could prove that {φk}k≥1 is a martingale difference sequence by showing that E [φk | Fk−1] = −ˆρk (cid:28) ¯Gk√ Ak (cid:104) ˜Gk − EZk [ ˜Gk] (cid:105)(cid:29) , EZk = 0. In addition, using Assumptions (A3), (A4) and Jensen’s inequality, we have ∥ ˜Gk∥F = ∥Gk∥F max{1, ∥Uk∥/(dk √ mn)} Therefore, we derive that ≤ ∥Gk∥F ≤ G, ∥EZk [ ˜Gk]∥F ≤ EZk ∥ ˜Gk∥F ≤ G. Let ω′ k = 2Gˆρk (cid:13) (cid:13) (cid:13) ¯Gk√ Ak ∥ ˜Gk − EZk [ ˜Gk]∥F ≤ ∥ ˜Gk∥F + ∥EZk [ ˜Gk]∥F ≤ 2G. (cid:13) (cid:13) (cid:13)F . We thus derive from the Cauchy-Schwarz inequality and (103) that (103) (cid:20) exp E (cid:19) (cid:18) φ2 k (ω′ k)2 (cid:21) | Fk−1 ≤ E exp (cid:13) (cid:13) (cid:13) ¯Gk√ Ak (cid:13) 2 (cid:13) (cid:13) F ∥ ˜Gk − EZk [ ˜Gk]∥2 (cid:13) 2 (cid:13) (cid:13) ¯Gk√ Ak (cid:13) (cid:13) (cid:13) F F 4G2 | Fk−1 ≤ exp(1). Then, using Lemma B.1, it leads to that for any λ > 0, with probability at least 1 − δ, D.2 = t (cid:88) k=1 φk ≤ 3λG2 t (cid:88) k=1 ˆρ2 k (cid:13) (cid:13) (cid:13) (cid:13) = 3λG2 t (cid:88) n (cid:88) m (cid:88) k=1 i=1 j=1 (cid:113) ˆρk a(k) ij · ˆρk + 1 λ log (cid:19) (cid:18) 1 δ (cid:13) 2 (cid:13) (cid:13) (cid:13) F ¯Gk√ Ak (cid:16) (cid:17)2 ¯g(k) ij (cid:113) a(k) ij + 1 λ log (cid:19) . (cid:18) 1 δ Since {β2,k}k≥2 is non-decreasing, we could apply Lemma B.3 to derive that (cid:115) ≤ (cid:113) 1 a(k) ij (cid:115) 1 β2,k(1 − β2,k)ϵ1 ≤ 1 min{β2,1, β2,2}(1 − β2,k)ϵ1 ≤ 2 (cid:112)(1 − β2,k)ϵ1 . Then, we apply (94), and re-scale δ to obtain that for any λ > 0, with probability at least 1 − δ, for all t ∈ [T ], Setting λ = √ t (cid:88) D.2 ≤ 6λG2ρ0(ϵ2 + Θmax) √ ϵ1 (cid:13) (cid:13) (cid:13) (cid:13) ϵ1/(24G2ρ0(ϵ2 + Θmax)), we derive that (cid:13) (cid:13) (cid:13) (cid:13) ¯Gk √ Ak D.2 ≤ (cid:13) 2 (cid:13) (cid:13) (cid:13) t (cid:88) 1 4 ˆρk ˆρk k=1 + 4 F k=1 ¯Gk √ Ak 4 (cid:13) 2 (cid:13) (cid:13) (cid:13) F + 1 λ log (cid:19) . (cid:18) T δ 24G2ρ0(ϵ2 + Θmax) √ ϵ1 log (cid:19) . (cid:18) T δ (104) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1944 1945 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'not sensitive to the choice of ϵ1, and a relatively large ϵ1 can still lead to convergence, making the polynomial dependency O(1/ϵ1) in our convergence bounds acceptable. ', 'modified_lines': '', 'original_lines': '1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 35 Under review as a conference paper at ICLR 2025 (a) ResNet-20 on CIFAR-10 (b) ResNet-20 on CIFAR-100 (c) ResNet-110 on CIFAR-100 Figure 2: Average training loss curve under different decay rate parameters c. 
36 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 05000100001500020000Step t0.00.51.01.52.0Training Lossc=0.5c=0.6c=0.7c=0.8c=0.9c=1.005000100001500020000Step t0246Training Lossc=0.5c=0.6c=0.7c=0.8c=0.9c=1.005000100001500020000Step t0246Training Lossc=0.5c=0.6c=0.7c=0.8c=0.9c=1.0 Under review as a conference paper at ICLR 2025 (a) ResNet-20 on CIFAR-10 (b) ResNet-20 on CIFAR-100 (c) ResNet-110 on CIFAR-100 Figure 3: Average test accuracy and standard deviation (shallow blue region) under different decay rate parameters c. 37 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2106 2107 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'speed compared to the default threshold (represented by "Baseline"), which helps to complement the theoretical results in Theorem 7.1. ', 'modified_lines': '', 'original_lines': '1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 38 Under review as a conference paper at ICLR 2025 (a) ResNet-20 on CIFAR-10 (b) ResNet-20 on CIFAR-100 (c) ResNet-110 on CIFAR-100 Figure 4: Training loss vs. steps using Adafactor without update clipping under different ϵ1. The step-size ηt, decay rate β2,k, and learning rate warm-up are set by default. 39 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 020000400006000080000Step t01234Training Loss1=10301=10151=1081=1051=1031=101020000400006000080000Step t0246Training Loss1=10301=10151=1081=1051=1031=101020000400006000080000Step t02468Training Loss1=10301=10151=1081=1051=1031=101 Under review as a conference paper at ICLR 2025 (a) ResNet-20 on CIFAR-10 (b) ResNet-20 on CIFAR-100 (c) ResNet-110 on CIFAR-100 Figure 5: Training loss vs. steps on different models and datasets. We use step-size without warm-up technique and test under different α. 40 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-25 15:51:54
ICLR.cc/2025/Conference
eCikScdERB
dLLQefSAiN
[{'section': 'Abstract', 'after_section': None, 'context_after': 'G ≥ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:88) ', 'modified_lines': '', 'original_lines': 'ρk ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 CONVERGENCE RESULT FOR FULL-BATCH ADAFACTOR', 'after_section': None, 'context_after': '(cid:88) ', 'paragraph_idx': 29, 'before_section': None, 'context_before': '(cid:13) (cid:13) (cid:13) ', 'modified_lines': ' ρk ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 STOCHASTIC ADAFACTOR WITHOUT UPDATE CLIPPING', 'after_section': None, 'context_after': 'β2,1 = 1/2, β2,k = 1 − 1/kc, ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'We also assume that the gradient is bounded, satisfying that ∥∇f (X)∥ ≤ G0, ∀X ∈ Rn×m. Then, we have the following convergence bound. Theorem C.1. Let {Xk}k≥1 be generated by Algorithm 1 without update clipping where ηk is given ', 'modified_lines': 'by (4) for each k ≥ 1. If Assumptions (A1)-(A3) hold, ∥∇f (X)∥F ≤ G0, ∀X ∈ Rn×m, Assumption 1 holds, and ', 'original_lines': 'by (4) for each k ≥ 1. If Assumptions (A1)-(A3) hold and Assumption 1 holds, and ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-26 07:20:13
ICLR.cc/2025/Conference
QCIUUbZYPn
1QW0CqZ2JY
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'cost. Without the need to retrain the model, our algorithm is plug-and-play and easy to deploy. Experimental results indicate that SSD outperforms existing de- fenses, in terms of MIA resistance and model’s utility, across various attack al- ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'state-of-the-art white-box attacks, and existing defenses cannot resist them ef- fectively. To fill this gap, we propose Stealthy Shield Defense (SSD), a post- processing algorithm against black-box MIAs. Our idea is to modify the model’s ', 'modified_lines': 'outputs to minimize the conditional mutual information (CMI). We mathemati- cally prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our effective- ness, both in resisting MIAs and preserving utility. For minimizing CMI, we for- mulate a convex optimization problem and solve it via the water-filling method. Adaptive rate-distortion is introduced to constrain the modification to the out- puts, and the water-filling is implemented on GPUs to address computational ', 'original_lines': 'outputs to minimize the conditional mutual information (CMI). We mathemat- ically prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our effec- tiveness, both in resisting MIAs and preserving utility. For minimizing CMI, we formulate a convex optimization problem and solve it via the water-filling method. Adaptive rate-distortion is introduced to reduce the modification to the outputs, and the water-filling is implemented on GPUs to address computational ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'MIAs are divided into white-box and black-box (Fang et al., 2024c). White-box attackers know the details of the model, whereas black-box attackers can only query the model and obtain outputs. ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'INTRODUCTION ', 'modified_lines': 'Deep neural networks (DNNs) have driven widespread deployment in multiple mission-critical do- mains, such as computer vision (He et al., 2016), natural language processing (Devlin et al., 2019) and dataset distillation (Zhong et al., 2024b;a). However, their integration with sensitive training data has raised concerns about privacy breaches. Recent studies (Fang et al., 2024b;a; 2025) have explored various attack methods to probe these privacy, such as gradient inversion (Fang et al., 2023; Yu et al., 2024b) and membership inference (Hu et al., 2022). Among the emergent threats, model inversion attacks (MIAs) aim to reconstruct the private training data by accessing a public model, posing the greatest risk (Qiu et al., 2024c). For instance, consider a face recognition access control system with a publicly accessible interface. Through carefully crafted malicious queries, model in- version attackers can infer the sensitive facial images stored in the system, along with the associated user identities. 
', 'original_lines': 'The rapid advancement of deep neural networks (DNNs) has driven their widespread deployment in multiple mission-critical domains, such as computer vision (He et al., 2016), natural language pro- cessing (Devlin et al., 2019) and dataset distillation (Zhong et al., 2024b;a). However, their growing integration with sensitive data has raised significant concerns about privacy vulnerabilities. Recent studies (Fang et al., 2024b;a; 2025) have explored various attack methods to probe these privacy vulnerabilities, such as gradient inversion (Fang et al., 2023; Yu et al., 2024b) and membership in- ference (Hu et al., 2022). Among the emergent privacy threats, model inversion attacks (MIAs) (Qiu et al., 2024b) aim to reconstruct private training data by accessing a machine learning model, posing significant risks of privacy breaches. For instance, consider a face recognition access control system with a publicly accessible interface. Through carefully crafted malicious queries, attackers might employ MIA to infer the sensitive facial images stored in the system database, along with the associated user identities, leading to privacy leakage. ', 'after_paragraph_idx': 4, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Published as a conference paper at ICLR 2025 black-box attacks effectively. Existing defenses focus on modifying the weights and structure of To address these concerns, we propose Stealthy Shield Defense (SSD), a post-processing algorithm The contributions of this paper are: • We propose a post-processing algorithm to minimize CMI without retraining models. In our algorithm, temperature is introduced to calibrate the probabilities and adaptive rate- distortion is introduced to constrain the modification to the outputs. We speed up our 2 RELATED WORK 2.1 MODEL INVERSION ATTACKS AND DEFENSES 2 Inputs 𝑿Target ModelDistributions of 𝒀with High CMIPost-ProcessingAlgorithmDistributions of 𝒀with Low CMIGroundTruthLabels 𝒀⋯AliceBobChris𝑰𝑿;𝒀↓⋯𝑰𝒀;𝒀↑ Published as a conference paper at ICLR 2025 2.2 INFORMATION BOTTLENECK AND CONDITIONAL MUTUAL INFORMATION model should compress the redundant information in inputs while preserving the useful information In this paper, we theoretically prove that CMI is a special case of IB and thus inherits the advantages of IB. Furthermore, we propose a novel model inversion defense based on CMI. ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'more common. As models grow larger nowadays, they are mostly stored on servers and can only be accessed online, which are typical black-box scenarios. (2) Black-box attacks are more powerful. The latest soft-label attack RLBMI (Han et al., 2023) and hard-label attack LOKT (Nguyen et al., ', 'modified_lines': '2024) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist the model, but black-box attackers only exploit the outputs, and thus are less susceptible. against black-box MIAs. As shown in Figure 1, the idea of SSD is to modify the model’s outputs to minimize the conditional mutual information (CMI) (Yang et al., 2024). CMI quantifies the dependence between inputs and predictions when ground truths are given. In Theorem 1, we prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB— making predictions less dependent on inputs and more dependent on ground truths. 
Under this theoretical guarantee, SSD achieves a better trade-off between MIA resistance and model’s utility. Without the need to retrain the model, SSD is plug-and-play and easy to deploy. Figure 1: An overview of Stealthy Shield Defense. The probability simplex is a triangle when the number of classes is three. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) minimized and I( ˆY ; Y ) maximized. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. • We introduce CMI into model inversion defense for the first time, and theoretically prove its effectiveness. algorithm by GPU-based water-filling method as well. • Our experiments indicate that we outperform all competitors, in terms of MIA-resistance and model’s utility, exhibiting good generalizability across various attack algorithms, train- ing datasets, and model architectures. Model inversion attacks (MIAs) are a serious privacy threat to released models (Fang et al., 2024c). MIAs are categorized as white-box (Zhang et al., 2020; Chen et al., 2021; Struppek et al., 2022; Yuan et al., 2023; Qiu et al., 2024a) and black-box. We focus on black-box MIAs, where attackers can only query the model and obtain outputs. In this scenario, BREP (Kahla et al., 2022) utilizes zero-order optimization to drive the latent vectors away from the decision boundary. Mirror (An et al., 2022) and C2F (Ye et al., 2023) explore genetic algorithms. LOKT (Nguyen et al., 2024) trains multiple surrogate models and applies white-box attacks to them. To address the threat of MIAs, a variety of defenses have been proposed. MID (Wang et al., 2021), BiDO (Peng et al., 2022) and LS (Struppek et al., 2024) change the training losses, TL (Ho et al., 2024) freezes some layers of the model, and CA-FaCe (Yu et al., 2024a) change the structure of the model. However, black-box attackers only exploit the outputs, and thus are rarely hindered. The defense against black-box MIAs is still limited. In this paper, we propose a novel black-box defense based on post-processing, without retraining the model. Experimental results indicate that we outperform the existing defenses. Tishby et al. (1999) proposed the Information Bottleneck (IB) principle: a good machine learning for tasks. They later highlighted that information is compressed layer-by-layer in DNNs (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). Alemi et al. (2017) proposed Variational Infor- mation Bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2021) applied VIB in their Mutual Information-based Defense (MID). Yang et al. (2024) proposed to use conditional mutual information (CMI) as a performance metric for DNNs, providing the calculation formula and geometric interpretation of CMI. By minimizing CMI, they improve classifiers (Yang et al., 2025) and address class imbalance (Hamidi et al., 2024). By maximizing CMI, they improve knowledge distillation (Ye et al., 2024) and address nasty teachers (Yang & Ye, 2024). ', 'original_lines': '2024) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist the model, but black-box attackers only exploit the outputs, and are thus less susceptible. against black-box MIAs. The idea of SSD is to modify the model’s outputs to minimize the con- ditional mutual information (CMI), as shown in Figure 1. CMI quantifies the dependence between inputs and predictions when the ground truth labels are given. 
In Theorem 1, we prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. Under this theoretical guarantee, SSD achieves a better trade-off between MIA resistance and model’s utility. Without the need to retrain the model, SSD is plug-and-play and easy to deploy. Figure 1: An overview of Stealthy Shield Defense (SSD). The probability simplex is a triangle when the number of classes is three. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) decreased and I( ˆY ; Y ) increased. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. • We introduce CMI into model inversion defense and prove that CMI is a special case of IB, theoretically guaranteeing the effectiveness of CMI. algorithm by GPU-accelerated water-filling method as well. • Our experiments demonstrate that SSD outperforms all competitors, in terms of MIA- resistance and model’s utility, exhibiting good generalizability across various attack al- gorithms, training datasets, and model architectures. Model inversion attacks (MIAs) are a serious privacy threat to released models in the inference time (Fang et al., 2024c). MIAs can be divided into white-box attacks (Zhang et al., 2020; Chen et al., 2021; Yuan et al., 2023; Struppek et al., 2022; Qiu et al., 2024a) and black-box attacks (Kahla et al., 2022; An et al., 2022; Ye et al., 2023; Nguyen et al., 2024; Han et al., 2023). In this paper, we only focus on the black-box attacks, where attackers can only access the output of the victim model. In this scenario, BREP (Kahla et al., 2022) utilizes zero-order optimization to urge the latent vector to move away gradually from the decision boundary. Mirror (An et al., 2022) and C2F (Ye et al., 2023) explore the genetic algorithm. LOKT (Nguyen et al., 2024) train multiple surrogate models and apply white-box attacks to them. In order to reduce the threat of model inversion attacks, a variety of defense algorithms have been proposed (Wang et al., 2021; Peng et al., 2022; Struppek et al., 2024; Ho et al., 2024; Yu et al., 2024a). They mainly focus on the white-box scenario. However, the defense research in the black- box scenarios is limited. Moreover, these researches typically defense by modifying either training losses (Wang et al., 2021; Peng et al., 2022; Struppek et al., 2024) or model architectures (Yu et al., 2024a), which have a significant impact on model performance. To overcome the limitations, we propose a plug-and-play post-processing method. Tishby et al. (1999) proposed the information bottleneck (IB) principle: a good machine learning for tasks. They later highlighted that information is compressed layer-by-layer in DNNs (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). Alemi et al. (2017) proposed variational information bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2021) applied VIB in their model inversion defense. Yang et al. (2024) proposed to use conditional mutual information (CMI) as a performance metric for DNNs, providing the calculation formula and geometric interpretation of CMI. By minimizing CMI, they improved classifiers (Yang et al., 2025) and addressed class imbalance (Hamidi et al., 2024). By maximizing CMI, they improved knowledge distillation (Ye et al., 2024) and addressed nasty teachers (Yang & Ye, 2024). 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '2.1 MODEL INVERSION ATTACKS AND DEFENSES', 'after_section': None, 'context_after': 'I(X; ˆY ) := ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '3.3 DEFENSE VIA MUTUAL INFORMATION ', 'modified_lines': 'Wang et al. (2021) proposed Mutual Information-based Defense (MID). The mutual information between X and ˆY is defined as ', 'original_lines': 'Wang et al. (2021) proposed to resist MIAs via the mutual information, which is defined as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1Some literature refer to hard-label as label-only, and soft-label as black-box. 3 ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '(1) ', 'modified_lines': 'I(X; ˆY ) quantifies the dependence between X and ˆY . They minimize it to prevent attackers from obtaining the information about D. However, minimizing I(X; ˆY ) hurts the model’s utility. Es- pecially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 DEFENSE VIA MUTUAL INFORMATION', 'after_section': '3.3 DEFENSE VIA MUTUAL INFORMATION', 'context_after': 'As an alternative, they introduced information bottlenecks (IB), which is defined as I(X; Z) − λ · I(Z; Y ), (2) 4 METHODOLOGY 4.1 DEFENSE VIA CONDITIONAL MUTUAL INFORMATION We aim to resist black-box MIAs, so we still focus on ˆY rather than Z. Furthermore, we observe Dy := {x ∈ X | (x, y) ∈ D} I(X; ˆY |Y = y) := ', 'paragraph_idx': 20, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': 'where λ > 0. They use (2) as a regularizer to train f , minimizing I(X; Z) to resist MIAs while maximizing I(Z; Y ) to preserve model’s utility. that all MIA algorithms target one fixed label during attacking. Formally, let be the sub-dataset whose ground truth label is y. Given y ∈ Y, all MIA algorithms aim to reconstruct ˆDy as close to Dy as possible. Against their intention, we propose to minimize ', 'original_lines': 'I(X; ˆY ) quantifies the dependence between X and ˆY . They minimized it to prevent attackers from obtaining the information about D, but they hurt the model’s utility. Especially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. where λ > 0. They used (2) as a regularizer to train f , minimizing I(X; Z) to resist MIAs while maximizing I(Y ; Z) to preserve model’s utility. They achieved a better trade-off by adjusting λ. that all MIA algorithms target one fixed label during attacking. Formally, given y ∈ Y, let be the training subset whose ground truth label is y, and attackers aim to reconstruct ˆDy as close to Dy as possible. Based on their behavior, we propose to resist MIAs via ', 'after_paragraph_idx': 20, 'before_paragraph_idx': None}, {'section': '3.3 DEFENSE VIA MUTUAL INFORMATION', 'after_section': None, 'context_after': 'I(X; ˆY |Y ) := ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '(3) I(X; ˆY |Y = y) quantifies the dependence between X and ˆY when Y = y. We minimize it to ', 'modified_lines': 'prevent attackers from obtaining the information about Dy. 
Minimizing (3) on each y ∈ Y is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'original_lines': 'prevent attackers from obtaining the information about Dy. Minimizing (3) on each y ∈ Y is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 DEFENSE VIA CONDITIONAL MUTUAL INFORMATION', 'after_section': '4.1 DEFENSE VIA CONDITIONAL MUTUAL INFORMATION', 'context_after': 'The I(X; Z) in (2) is challenging to calculate because the input space X and feature space Z are 4 ', 'paragraph_idx': 23, 'before_section': '4.1 DEFENSE VIA CONDITIONAL MUTUAL INFORMATION', 'context_before': 'I(X; ˆY |Y ) = I(X; ˆY ) − I( ˆY ; Y ). ', 'modified_lines': 'Our proof is provided in Appendix A. Theorem 1 proves that CMI inherits the benefits of IB, in- cluding two aspects: • Minimizing I(X; ˆY ) to compress the redundant information in inputs, as well as decreas- ing the dependence between inputs and predictions. This helps to resist MIAs as shown in MID (Wang et al., 2021). • Maximizing I( ˆY ; Y ) to preserve the useful information for tasks, as well as increasing the dependence between predictions and ground truths. This helps to improve model’s utility obviously. both high-dimensional. Previous work had to estimate the variational bounds of IB (Tishby et al., 1999; Tishby & Zaslavsky, 2015; Alemi et al., 2017; Shwartz-Ziv & Tishby, 2017). Fortunately, as a special case of IB, CMI can be calculated and minimized directly, as described in the next section. 4.2 MINIMIZE CMI VIA POST-PROCESSING Previous work used CMI as a regularizer and minimized it during training models (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). Unlike them, we propose to minimize CMI via post- processing. ', 'original_lines': 'Our proof is provided in Appendix A. Theorem 1 proves that CMI inherits the benefits of IB, i.e. minimizing CMI has effects in two aspects: • Minimizing I(X; ˆY ) to compress the redundant information in inputs, and decrease the dependence between inputs and predictions. This helps to resist MIAs (Wang et al., 2021). • Maximizing I( ˆY ; Y ) to preserve the useful information for tasks, and increase the depen- dence between predictions and ground-truths. This helps to improve model’s performance. both high-dimensional. Prior work can only estimate variational bounds for IB (Tishby et al., 1999; Tishby & Zaslavsky, 2015; Alemi et al., 2017; Shwartz-Ziv & Tishby, 2017). Fortunately, as a special case of IB, CMI can be calculated and minimized directly, as described in the next section. 4.2 MINIMIZE CMI BY POST-PROCESSING Previous work used CMI as a loss function and minimized it during training models (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). Unlike them, we propose a post-processing algorithm to minimize CMI. Without the need to retrain the model, our algorithm is plug-and-play and easy to deploy. ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 22}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': None, 'context_after': '(cid:80) P(ˆy|y) instead, which is equivalent to the original objective in terms of mathematical ˆy∈Y ', 'paragraph_idx': 27, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'ing (cid:80) P(ˆy|y) for each x input to f . However, this objective function is too y∈Y ', 'modified_lines': 'complex to optimize. 
For simplicity, we sample y ∈ Y with the probability P(y|x) and minimize ', 'original_lines': 'complex to optimize. For simplicity, we sample y ∈ Y with the probability of P(y|x) and minimize ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'P(ˆy|y) ≈ mean x′∈Dy ', 'paragraph_idx': 29, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'ˆy, y ∈ Y. ', 'modified_lines': 'By expressing P(ˆy|y) as a mathematical expectation, we can estimate it with the sample mean. Note that the samples in Dy are i.i.d. with X|Y = y, so we consider2 ', 'original_lines': 'By expressing P(ˆy|y) as a conditional mathematical expectation, we can estimate it by samples. Note that the samples in Dy are i.i.d. with X|Y = y, so we let ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 29}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'P(ˆy|x) log ', 'paragraph_idx': 30, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'f (x′) and qy ', 'modified_lines': 'ˆy be the ˆy-th component of qy, ˆy ∈ Y. We have ', 'original_lines': 'ˆy be the ˆy-th component of qy, and then we have ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'In rate-distortion theory (Shannon, 1959), minimizing mutual information under bounded distortion ¯H(x) := ', 'paragraph_idx': 30, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': '= KL(f (x)||qy), where KL is the Kullback-Leibler divergence, a binary convex function. ', 'modified_lines': 'To minimize KL(f (x)||qy), we fix qy for simplicity and modify f (x). Let p ∈ ∆Y be the modified output, and then our objective is KL(p||qy). To preserve the model’s utility, we add constrain ∥p − f (x)∥1 ≤ ε where ε > 0 is the distortion bound. constraint is for signal compression. If a signal has less information, it is easier to compress, and a stricter distortion bound can be applied. Inspired by their work, we introduce the normalized Shannon entropy to quantify the information in f (x), which is defined as ', 'original_lines': ' To determine the sampling probability P(y|x), a simple idea is to consider P(y|x) = P(ˆy|x) = fˆy(x) for y = ˆy ∈ Y. (5) (6) But Guo et al. (2017) have demonstrated that (6) is inaccurate for modern neural networks. Inspired by their work, we introduce temperature mechanism to calibrate it. When x is inputted to f , we minimize CMI by modifying the prediction f (x). Let p ∈ ∆Y be the modified prediction, and our objective function is KL(p||qy) based on the above derivation. To preserve the model’s utility, we constrain ∥p − f (x)∥1 ≤ ε where ε > 0 is the distortion bound. constraint is for signal compression. If a signal has less information, it is easier to compress, and a stricter distortion bound can be applied. 
Regarding f (x) as a signal and p as the compressed signal, we can use normalized entropy to quantify the information in f (x), which is defined as ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 30}, {'section': '5.1 EXPERIMENT SETTINGS', 'after_section': None, 'context_after': '↓ Acc@1 ', 'paragraph_idx': 36, 'before_section': None, 'context_before': '2624 10.6% 2665 ', 'modified_lines': '7 ', 'original_lines': '0.79 0.86 0.99 0.88 0.95 1.31 0.78 0.79 0.79 0.75 0.88 0.80 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 ↓ Acc@1', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 Model Dataset ', 'paragraph_idx': 49, 'before_section': None, 'context_before': '1.03 1.07 ', 'modified_lines': 'In hard-label scenarios with BREP and LOKT attacks, we provided a quantitive results in Table 2. Note that LOKT is the SOTA black-box attack method. It demonstrates very high attack perfor- mance across various kinds of settings. While previous defenses only showed limited defensive capabilities, our SSD almost completely defeats this attack. Especially in the attack against IR-152 with FaceScrub dataset, without any defense, LOKT showed an attack accuracy of up to 83.0%. However, our defense method reduce it to only 1.8%, making it almost impossible to launch a suc- cessful attack. Moreover, our defense largely enhance the feature distance σf ace from 0.66 to 1.53, which indicate that our defense method make the attack failed to capture the privacy characteristics. Table 2: MIA robustness against hard-label attacks. ', 'original_lines': 'In hard-label scenarios with BREP and LOKT attacks. We provided a quantitive results in Table 2. Note that LOKT is the SOTA black-box attack method. It demonstrates very high attack performance 7 across various kinds of settings. While previous defenses only showed limited defensive capabilities, our approach almost completely defeats this attack. Especially in the attack against IR-152 with FaceScrub dataset, without any defense, LOKT showed an attack accuracy of up to 82.9%. However, our defense method reduce it to only 1.7%, making it almost impossible to launch a successful attack. Moreover, our defense largely enhance the feature distance σf ace from 0.66 to 1.53, which indicate that our defense method make the attack failed to capture the privacy characteristics. Table 2: Experiment results against hard-label attacks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 ↓ Acc@1', 'after_section': None, 'context_after': '8 Published as a conference paper at ICLR 2025 Table 3: Evaluation results on model’s utility. ', 'paragraph_idx': 55, 'before_section': '7 ↓ Acc@1', 'context_before': 'Figure 2: Visual comparison of reconstructed images using various black-box attack methods against an IR-152 model trained on CelebA, evaluated under different defense strategies. The top row ', 'modified_lines': 'displays the images of the target class from the private train dataset for reference. Label 14 65 17 Training Examples Attack Defense Mirror C2F BREP LOKT Mirror C2F BREP LOKT Mirror C2F BREP LOKT None MID BiDO LS TL SSD Visualization results of the reconstructed images with different defenses under different black-box attacks are shown in Fig. 2. 
Compared to previous approaches, our SSD produces reconstructed images that deviate more significantly from the private images, demonstrating its effectiveness in in- creasing the challenge for attackers to extract sensitive visual features and thereby enhancing privacy protection. ', 'original_lines': 'displays the ground truth images of the target class from the private train dataset for reference. Visualization results of the reconstructed images with different defense methods under different black-box attacks are shown in Fig. 2. Compared to previous approaches, our defense strategy Figure 3: Ablation Study on temperature T and distortion bound ε. produces reconstructed images that deviate more significantly from the private images, demonstrat- ing its effectiveness in increasing the challenge for attackers to extract sensitive visual features and thereby enhancing privacy protection. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 55}, {'section': '7 ↓ Acc@1', 'after_section': '7 ↓ Acc@1', 'context_after': '5.3 ABLATION STUDIES In this section, we conduct ablation experiments to explore the effects of the temperature and distor- 9 Published as a conference paper at ICLR 2025 6 CONCLUSION ', 'paragraph_idx': 54, 'before_section': None, 'context_before': '1.96 0.74 ', 'modified_lines': 'The evaluation results for the target model’s utility are presented in Table 3. The results indicate that our SSD holds the best utility, outperforming all competitors across different metrics, training datasets and model structures. According to our bounded distortion constraint, our Max L1 ≤ ε always holds strictly, where the competitors’ are close to the maximum of 2. In particular, our Avg L1 is only 1/5 to 1/2 of the competitors’. tion bound in our SSD. The target model is IR-152 trained on FaceScrub. The results are shown in Figure 3. Figure 3: Ablation Study on temperature T and distortion bound ε. Figure (a)(b) show the results on temperature T , where the attack accuracy is measured on BREP. It can be seen that as the temperature T rises, our MIA robustness becomes stronger. This is be- 0.020.030.040.050.060.070.08w/o T.(a) Temperature T010.020.030.040.050.0Attack Accuracy%Acc@[email protected] Distanceface0.850.900.951.001.051.101.15(c) Distortion Bound 010.020.030.040.050.0Attack Accuracy%Acc@[email protected] Distanceface0.020.030.040.050.060.070.08w/o T.(b) Temperature T95.596.096.597.097.598.0Test Accuracy%Acc0.00.20.40.60.81.01.2Prediction BiasMax L1Avg L10.850.900.951.001.051.101.15(d) Distortion Bound 95.596.096.597.097.598.0Test Accuracy%Acc0.00.20.40.60.81.01.2Prediction BiasMax L1Avg L1Max L1 w/o Ada.Avg L1 w/o Ada. cause the sampling probability in Algorithm 1 is closer to the uniform distribution, which makes it easier to return misleading labels to hard-label attackers. However, high temperature impairs the model’s utility. In particular, the “w/o T.” in Figure (a)(b) represents the case without temperature mechanism. In that case, neither MIA robustness nor model’s utility is good, which demonstrates the necessity of introducing a temperature mechanism. For the distortion bound, the results are displayed in Figure (c)(d). The attack accuracy is measured on Mirror. As the distortion bound goes up, our defense can make more modifications to the output, resulting in better MIA robustness. It can be seen that relaxing the distortion bound mainly affects the maximum distortion Max L1, while having almost no effect on the average distortion Avg L1. 
Especially, without the adaptive mechanism, our Avg L1 would become as high as other defenses. This demonstrates the necessity of introducing the adaptive mechanism. ', 'original_lines': 'The evaluation results for the target model’s utility are presented in Table 3. The results indicate that our defense holds the best utility, outperforming all competitors across different metrics, training datasets and model structures. Thanks to our bounded distortion constraint, our Max L1 ≤ ε always holds strictly, where the competitors’ are close to the maximum of 2. In particular, thanks to our adaptive rate-distortion, our Avg L1 is only 1/5 to 1/2 of the competitors’. tion bound in our defense. The target model is IR-152 trained on FaceScrub. Figure 3 shows the experimental results on temperature, where the attack accuracy is measured on BREP. It can be seen that as the temperature T rises, our MIA robustness becomes stronger. This is because the y in Algorithm 1 is closer to uniform distribution, which makes it easier to return misleading labels to hard-label attackers. However, high temperature impairs the model’s utility. In particular, the “w/o T.” in Figure (a)(b) represents the case without temperature mechanism. In that case, neither MIA robustness nor model’s utility is ideal, which demonstrates the necessity of introducing a temperature mechanism. For the distortion bound, the experiment results are displayed in Figure (c)(d). The attack accuracy is measured on Mirror. As the distortion bound goes up, our defense can make more alterations to the output, resulting in better MIA robustness. It can be seen that relaxing the distortion bound mainly affects the maximum distortion Max L1, while having almost no effect on the average distortion Avg L1. Especially, without the adaptive mechanism, our Avg L1 would become as high as that of other defenses. This demonstrates the necessity of introducing the adaptive mechanism. ', 'after_paragraph_idx': 54, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Xinhao Zhong, Hao Fang, Bin Chen, Xulin Gu, Tao Dai, Meikang Qiu, and Shu-Tao Xia. Hierar- chical features matter: A deep exploration of gan priors for improved dataset distillation. arXiv ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Xinhao Zhong, Bin Chen, Hao Fang, Xulin Gu, Shu-Tao Xia, and En-Hui Yang. Going beyond fea- ture similarity: Effective dataset distillation based on class-aware conditional mutual information. arXiv preprint arXiv:2412.09945, 2024a. ', 'modified_lines': '', 'original_lines': ' 12 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EXPERIMENT SETTINGS', 'after_section': None, 'context_after': 'Table 8: The MIA robustness of all defenses under Mirror attack on high resolution. ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'F EXPERIMENTS ON HIGH RESOLUTION To adapt to high resolution, we choose Mirror as the attacker. The prior distribution is StyleGAN2 ', 'modified_lines': 'trained on FFHQ with a resolution of 1024 × 1024. The generated images are center-cropped to 800 × 800, resized to 224 × 224, and inputted to the target model. The target model is ResNet-152, and the evaluation model is Inception-v3. The first 10 classes of FaceScrub are attacked, and for each class, we reconstruct 5 images. The attack results are shown in Table 8 and the models’ utility are shown in Table 9. 
Although models are more vulnerable on high resolution, our defense still achieves the best MIA robustness, with a good utility. ', 'original_lines': 'trained on FFHQ with a resolution of 1024×1024. The generated images are center-cropped to 800×800, resized to 224×224, and inputted to the target model. The target model is ResNet-152 and the evaluation model is Inception-v3. The first 10 classes of FaceScrub are attacked and each class reconstructed 5 images. The attack results are shown in Table 8 and the models’ utility are shown in Table 9. Although models are more vulnerable on high resolution, our defense still achieves the best MIA robustness, with a good utility. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 11:48:05
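The ablation text recorded in the entry above states that Algorithm 1 draws its label with probability softmax(f(x)/T), and that raising T flattens this distribution toward uniform. The short sketch below only illustrates that sampling step under those stated assumptions; the function name and the `scores` input are placeholders of this sketch and are not defined anywhere in these records.

```python
# Illustrative only: temperature-scaled label sampling as described in the
# ablation quoted above -- softmax(scores / T) flattens toward uniform as T grows.
import torch

def sample_label(scores: torch.Tensor, T: float) -> int:
    probs = torch.softmax(scores / T, dim=-1)   # larger T => closer to uniform
    return int(torch.multinomial(probs, num_samples=1).item())
```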
ICLR.cc/2025/Conference
1QW0CqZ2JY
ch0RcbdIEU
[]
2025-03-02 11:58:34
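Every record in this dump, including the minimal one just above with its empty content list, carries the same fields: venue, original OpenReview id, revision id, a Python-literal list of edit dicts (with keys such as 'section', 'context_before', 'context_after', 'original_lines', 'modified_lines' and paragraph indices), and a timestamp. The sketch below is a hedged illustration of how such a content field could be parsed and applied; the function names and the single-string input are assumptions of the sketch, not part of the dump.

```python
# Illustrative only: parse one record's content field (e.g. "[]" or
# "[{'section': ..., 'original_lines': ..., 'modified_lines': ...}]") and
# apply each edit by swapping 'original_lines' for 'modified_lines'.
import ast

def load_edits(content_field: str) -> list[dict]:
    return ast.literal_eval(content_field)

def apply_edit(text: str, edit: dict) -> str:
    old = edit.get('original_lines', '')
    new = edit.get('modified_lines', '')
    # Edits with empty 'original_lines' are insertions; those would need the
    # 'context_before' / 'context_after' anchors instead of a plain replace.
    if old and old in text:
        return text.replace(old, new, 1)
    return text

def apply_all(text: str, content_field: str) -> str:
    for edit in load_edits(content_field):
        text = apply_edit(text, edit)
    return text
```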
ICLR.cc/2025/Conference
ch0RcbdIEU
Q50GiYqOu4
[{'section': '2.2', 'after_section': None, 'context_after': 'Tianqu Zhuang1*, Hongyao Yu2*, Yixiang Qiu1*, Hao Fang1*, Bin Chen2#, Shu-Tao Xia1 1Shenzhen International Graduate School, Tsinghua University, China ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'STEALTHY SHIELD DEFENSE: A CONDITIONAL ', 'modified_lines': 'MUTUAL INFORMATION-BASED POST-PROCESSING AGAINST BLACK-BOX MODEL INVERSION ATTACKS ', 'original_lines': 'MUTUAL INFORMATION-BASED APPROACH AGAINST BLACK-BOX MODEL INVERSION ATTACKS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'https://github.com/ZhuangQu/Stealthy-Shield-Defense. 1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'state-of-the-art white-box attacks, and existing defenses cannot resist them ef- fectively. To fill this gap, we propose Stealthy Shield Defense (SSD), a post- processing algorithm against black-box MIAs. Our idea is to modify the model’s ', 'modified_lines': 'outputs to minimize the conditional mutual information (CMI). We mathemat- ically prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our effec- tiveness, both in resisting MIAs and preserving utility. For minimizing CMI, we formulate a convex optimization problem and solve it via the water-filling method. Adaptive rate-distortion is introduced to constrain the modification to the outputs, and the water-filling is implemented on GPUs to address computa- tion cost. Without the need to retrain the model, our algorithm is plug-and-play and easy to deploy. Experimental results indicate that SSD outperforms existing defenses, in terms of MIA resistance and model’s utility, across various attack algorithms, training datasets, and model architectures. Our code is available at ', 'original_lines': 'outputs to minimize the conditional mutual information (CMI). We mathemati- cally prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our effective- ness, both in resisting MIAs and preserving utility. For minimizing CMI, we for- mulate a convex optimization problem and solve it via the water-filling method. Adaptive rate-distortion is introduced to constrain the modification to the out- puts, and the water-filling is implemented on GPUs to address computational cost. Without the need to retrain the model, our algorithm is plug-and-play and easy to deploy. Experimental results indicate that SSD outperforms existing de- fenses, in terms of MIA resistance and model’s utility, across various attack al- gorithms, training datasets, and model architectures. Our code is available at ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'and dataset distillation (Zhong et al., 2024b;a). However, their integration with sensitive training data has raised concerns about privacy breaches. Recent studies (Fang et al., 2024b;a; 2025) have explored various attack methods to probe these privacy, such as gradient inversion (Fang et al., 2023; inversion attacks (MIAs) aim to reconstruct the private training data by accessing a public model, system with a publicly accessible interface. 
Through carefully crafted malicious queries, model in- version attackers can infer the sensitive facial images stored in the system, along with the associated user identities. ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'INTRODUCTION Deep neural networks (DNNs) have driven widespread deployment in multiple mission-critical do- ', 'modified_lines': 'mains, such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019) Yu et al., 2024b) and membership inference (Hu et al., 2021). Among the emergent threats, model posing the greatest risk (Qiu et al., 2024b). For instance, consider a face recognition access control ', 'original_lines': 'mains, such as computer vision (He et al., 2016), natural language processing (Devlin et al., 2019) Yu et al., 2024b) and membership inference (Hu et al., 2022). Among the emergent threats, model posing the greatest risk (Qiu et al., 2024c). For instance, consider a face recognition access control ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'more common. As models grow larger nowadays, they are mostly stored on servers and can only be accessed online, which are typical black-box scenarios. (2) Black-box attacks are more powerful. The latest soft-label attack RLBMI (Han et al., 2023) and hard-label attack LOKT (Nguyen et al., ', 'modified_lines': '2023) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist ', 'original_lines': '2024) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '2.1 MODEL INVERSION ATTACKS AND DEFENSES', 'after_section': '2.1 MODEL INVERSION ATTACKS AND DEFENSES', 'context_after': 'Yuan et al., 2023; Qiu et al., 2024a) and black-box. We focus on black-box MIAs, where attackers can only query the model and obtain outputs. In this scenario, BREP (Kahla et al., 2022) utilizes zero-order optimization to drive the latent vectors away from the decision boundary. Mirror (An trains multiple surrogate models and applies white-box attacks to them. 2024) freezes some layers of the model, and CA-FaCe (Yu et al., 2024a) change the structure of the model. However, black-box attackers only exploit the outputs, and thus are rarely hindered. The defense against black-box MIAs is still limited. ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2.1 MODEL INVERSION ATTACKS AND DEFENSES Model inversion attacks (MIAs) are a serious privacy threat to released models (Fang et al., 2024c). ', 'modified_lines': 'MIAs are categorized as white-box (Zhang et al., 2019; Chen et al., 2020; Struppek et al., 2022; et al., 2022) and C2F (Ye et al., 2024b) explore genetic algorithms. LOKT (Nguyen et al., 2023) To address the threat of MIAs, a variety of defenses have been proposed. MID (Wang et al., 2020), BiDO (Peng et al., 2022), and LS (Struppek et al., 2023) change the training losses, TL (Ho et al., ', 'original_lines': 'MIAs are categorized as white-box (Zhang et al., 2020; Chen et al., 2021; Struppek et al., 2022; et al., 2022) and C2F (Ye et al., 2023) explore genetic algorithms. LOKT (Nguyen et al., 2024) To address the threat of MIAs, a variety of defenses have been proposed. 
MID (Wang et al., 2021), BiDO (Peng et al., 2022) and LS (Struppek et al., 2024) change the training losses, TL (Ho et al., ', 'after_paragraph_idx': 8, 'before_paragraph_idx': None}, {'section': '2.2', 'after_section': '2.2', 'context_after': 'model should compress the redundant information in inputs while preserving the useful information for tasks. They later highlighted that information is compressed layer-by-layer in DNNs (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). Alemi et al. (2017) proposed Variational Infor- Mutual Information-based Defense (MID). Yang et al. (2024) proposed to use conditional mutual information (CMI) as a performance metric for DNNs, providing the calculation formula and geometric interpretation of CMI. By minimizing CMI, they improve classifiers (Yang et al., 2025) and address class imbalance (Hamidi et al., 2024). By (Yang & Ye, 2024). In this paper, we theoretically prove that CMI is a special case of IB and thus inherits the advantages ', 'paragraph_idx': 11, 'before_section': '2.2', 'context_before': 'INFORMATION BOTTLENECK AND CONDITIONAL MUTUAL INFORMATION ', 'modified_lines': 'Tishby et al. (2000) proposed the Information Bottleneck (IB) principle: a good machine learning mation Bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2020) applied VIB in their maximizing CMI, they improve knowledge distillation (Ye et al., 2024a) and address nasty teachers ', 'original_lines': 'Tishby et al. (1999) proposed the Information Bottleneck (IB) principle: a good machine learning mation Bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2021) applied VIB in their maximizing CMI, they improve knowledge distillation (Ye et al., 2024) and address nasty teachers ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '3.1 NOTATION', 'after_section': '3.1 NOTATION', 'context_after': 'Let ∆Y be the probability simplex with |Y| vertices. Let f (x) ∈ ∆Y be the output from the softmax f (x) = arg max fˆy(x). ', 'paragraph_idx': 14, 'before_section': None, 'context_before': '3.1 NOTATION ', 'modified_lines': 'Let f : X → Y be a neural classifier, X ∈ X be an input to f , Y ∈ Y be the ground truth label, ˆY ∈ Y be the label predicted by f , and Z ∈ Z be the intermediate representation in f . Note that Y → X → Z → ˆY is a Markov chain. Let P be the probability function and P(x) := P{X = x}, P(y) := P{Y = y}, P(x, ˆy|y) := P{X = x, ˆY = ˆy | Y = y}, etc. Note that P(x, y) is the private data distribution. layer of f when x is input to f , and fˆy(x) ∈ [0, 1] be the ˆy-th component of f (x), ˆy ∈ Y. Note that ', 'original_lines': 'Let f : X → Y be a neural classifier, X ∈ X be the input to f , Y ∈ Y be the ground truth label, ˆY ∈ Y be the label predicted by f , and Z ∈ Z be the intermediate feature in f . Note that Y → X → Z → ˆY is a Markov chain. Let P be the probability function and, for brevity, let P(x) := P{X = x}, P(y) := P{Y = y}, P(x, ˆy|y) := P{X = x, ˆY = ˆy | Y = y}, etc. layer of f when x is input to f , and fˆy(x) be the ˆy-th component of f (x), ˆy ∈ Y. Note that ', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '3.2 MODEL INVERSION ATTACKS', 'after_section': '3.2 MODEL INVERSION ATTACKS', 'context_after': 'Hard-label: Attackers can query any x ∈ X and obtain f (x) ∈ Y. Soft-label: Attackers can query any x ∈ X and obtain f (x) ∈ ∆Y. White-box: Attackers know the details of f . 
I(X; ˆY ) := ', 'paragraph_idx': 16, 'before_section': None, 'context_before': '3.2 MODEL INVERSION ATTACKS ', 'modified_lines': 'Let D ⊆ X × Y be the dataset learned by f . Note that the samples in D are i.i.d. to P(x, y). MIAs aim to reconstruct ˆD as close to D as possible. Based on the access to f , MIAs are categorized as: Hard-label and soft-label, collectively called black-box,1 are defended against in this paper. 3.3 MUTUAL INFORMATION-BASED DEFENSE (MID) Wang et al. (2020) proposed to resist MIAs by reducing the dependence between X and ˆY . The dependence is quantified by the mutual information, which is defined as ', 'original_lines': 'Let D ⊆ X × Y be the dataset learned by f . MIAs aim to reconstruct ˆD as close to D as possible. According to the access to f , MIAs are categorized as: Hard-label and soft-label, collectively called black-box,1 are what we aim to defend against. 3.3 DEFENSE VIA MUTUAL INFORMATION Wang et al. (2021) proposed Mutual Information-based Defense (MID). The mutual information between X and ˆY is defined as ', 'after_paragraph_idx': 16, 'before_paragraph_idx': None}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': None, 'context_after': '3 Published as a conference paper at ICLR 2025 (2) 4 METHODOLOGY ˆDy as close to Dy as possible. Against their intention, we propose to minimize I(X; ˆY |Y = y) := ', 'paragraph_idx': 18, 'before_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_before': '(1) ', 'modified_lines': 'They reduced I(X; ˆY ) to prevent attackers from inferring the information of D. However, low I(X; ˆY ) hurts the model’s utility. Especially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. 1Some literature refers to hard-label as label-only, and soft-label as black-box. As an alternative, they introduced the information bottleneck (IB), which is defined as I(X; Z) − λ · I(Z; Y ) where λ > 0. They used it as a regularizer to train f , minimizing I(X; Z) to resist MIAs while maximizing I(Z; Y ) to preserve the model’s utility. 4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE We aim to resist black-box MIAs where attackers cannot access Z, so we still minimize I(X; ˆY ) instead of I(X; Z). Furthermore, we observe that all MIA algorithms target one fixed label. Formally, let Dy := {x ∈ X : (x, y) ∈ D} be the sub-dataset whose ground truth label is y. For a given y ∈ Y, all attackers aim to reconstruct ', 'original_lines': 'I(X; ˆY ) quantifies the dependence between X and ˆY . They minimize it to prevent attackers from obtaining the information about D. However, minimizing I(X; ˆY ) hurts the model’s utility. Es- pecially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. 1Some literature refer to hard-label as label-only, and soft-label as black-box. As an alternative, they introduced information bottlenecks (IB), which is defined as I(X; Z) − λ · I(Z; Y ), where λ > 0. They use (2) as a regularizer to train f , minimizing I(X; Z) to resist MIAs while maximizing I(Z; Y ) to preserve model’s utility. 4.1 DEFENSE VIA CONDITIONAL MUTUAL INFORMATION We aim to resist black-box MIAs, so we still focus on ˆY rather than Z. Furthermore, we observe that all MIA algorithms target one fixed label during attacking. Formally, let Dy := {x ∈ X | (x, y) ∈ D} be the sub-dataset whose ground truth label is y. 
Given y ∈ Y, all MIA algorithms aim to reconstruct ', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': None, 'context_after': 'I(X; ˆY |Y ) := ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '(3) I(X; ˆY |Y = y) quantifies the dependence between X and ˆY when Y = y. We minimize it to ', 'modified_lines': 'prevent attackers from inferring the information of Dy. To protect the complete D, we minimize (3) for each y ∈ Y with the weight of P(y). It is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'original_lines': 'prevent attackers from obtaining the information about Dy. Minimizing (3) on each y ∈ Y is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'after_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_after': 'I(X; ˆY |Y ) = I(X; ˆY ) − I( ˆY ; Y ). 4.2 MINIMIZE CMI VIA POST-PROCESSING 4 Published as a conference paper at ICLR 2025 I(X; ˆY |Y ) = P(y) (cid:88) y∈Y x∈X ', 'paragraph_idx': 22, 'before_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_before': '(4) ', 'modified_lines': 'Theorem 1. CMI is a special case of the information bottleneck (2) when Z = ˆY and λ = 1, i.e. Our proof is provided in Appendix A. Our theorem proves that CMI inherits the benefits of IB in two aspects: • Minimize I(X; ˆY ) to compress the redundant information in inputs, and decrease the de- pendence between inputs and predictions. It improves the resistance to MIAs as shown by Wang et al. (2020). • Maximize I( ˆY ; Y ) to preserve the useful information for tasks, and increase the depen- dence between predictions and ground truths. It improves the utility obviously. I(X; Z) in IB is challenging to calculate because the input space X and representation space Z are both high-dimensional. Previous work could only approximate IB by variational bounds (Alemi et al., 2017). Fortunately, as a special case of IB, CMI can be calculated directly (Yang et al., 2024). Previous work used CMI as a regularizer and minimized it during training models (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). In contrast to them, we minimize CMI via post-processing. We transform CMI as follows: (cid:88) (cid:88) ', 'original_lines': 'Theorem 1. CMI is a special case of information bottlenecks (IB) when Z = ˆY and λ = 1, i.e. Our proof is provided in Appendix A. Theorem 1 proves that CMI inherits the benefits of IB, in- cluding two aspects: • Minimizing I(X; ˆY ) to compress the redundant information in inputs, as well as decreas- ing the dependence between inputs and predictions. This helps to resist MIAs as shown in MID (Wang et al., 2021). • Maximizing I( ˆY ; Y ) to preserve the useful information for tasks, as well as increasing the dependence between predictions and ground truths. This helps to improve model’s utility obviously. The I(X; Z) in (2) is challenging to calculate because the input space X and feature space Z are both high-dimensional. Previous work had to estimate the variational bounds of IB (Tishby et al., 1999; Tishby & Zaslavsky, 2015; Alemi et al., 2017; Shwartz-Ziv & Tishby, 2017). Fortunately, as a special case of IB, CMI can be calculated and minimized directly, as described in the next section. 
Previous work used CMI as a regularizer and minimized it during training models (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). Unlike them, we propose to minimize CMI via post- processing. CMI can be calculated as follows: (cid:88) (cid:88) ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': None, 'context_after': '(cid:80) P(ˆy|y) = (cid:88) x∈X P(x, ˆy|y) = (cid:88) x∈X x∈X x′∈Dy fˆy(x′), P(ˆy|x) log ', 'paragraph_idx': 27, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': ', ', 'modified_lines': 'by Markov chain Y → X → ˆY . Thus minimizing I(X; ˆY |Y ) is equivalent to minimizing (cid:80) ˆy∈Y P(ˆy|x) log P(ˆy|x) P(ˆy|y) for each x input to f . For simplicity, we sample y ∈ Y with the probability of P(y|x) and minimize It is equal to the original objective in terms of mathematical y∈Y P(y|x) (cid:80) ˆy∈Y P(ˆy|x) log P(ˆy|x) P(ˆy|y) instead.2 expectation. Next we need P(ˆy|x), P(y|x) and P(ˆy|y). To get P(ˆy|x), we have P(ˆy|x) = fˆy(x) by the design of neural classifiers. To get P(y|x), an intuitive idea is that P(y|x) = P(ˆy|x) for y = ˆy. But Guo et al. (2017) have demonstrated that it is inaccurate in modern neural networks. Inspired by their work, we introduce the temperature mechanism to adjust it. To get P(ˆy|y), we have = (cid:88) x∈X P(x|y)P(ˆy|x, y) = (cid:88) P(x|y)P(ˆy|x), P(x|y)fˆy(x) = EX|Y =y[fˆy(X)] ≈ mean where the “≈” is based on the fact that the samples in Dy are i.i.d. to P(x|y), and thus the sample mean can estimate the conditional expectation. In practice we use the validation set as Dy, because the training samples are overfitted by f , causing inaccurate estimation. Now the objective becomes ', 'original_lines': 'by Markov Y → X → ˆY . P(ˆy|x) log P(ˆy|x) P(y|x) (cid:80) ˆy∈Y Based on the above mathematical transformation, minimizing I(X; ˆY |Y ) is equivalent to minimiz- ing (cid:80) P(ˆy|y) for each x input to f . However, this objective function is too y∈Y complex to optimize. For simplicity, we sample y ∈ Y with the probability P(y|x) and minimize P(ˆy|y) instead, which is equivalent to the original objective in terms of mathematical ˆy∈Y expectation. Next, we find a way to calculate P(ˆy|x) and P(ˆy|y). P(ˆy|x) log P(ˆy|x) We consider P(ˆy|x) = fˆy(x) according to the design of neural classifiers. Note that P(x|y)P(ˆy|x, y) = P(x|y)P(ˆy|x) = (cid:88) (cid:88) x∈X P(x|y)fˆy(x), = EX|Y =y[fˆy(X)], ˆy, y ∈ Y. By expressing P(ˆy|y) as a mathematical expectation, we can estimate it with the sample mean. Note that the samples in Dy are i.i.d. with X|Y = y, so we consider2 P(ˆy|y) ≈ mean ˆy, y ∈ Y. Let qy := mean x′∈Dy f (x′) and qy ˆy be the ˆy-th component of qy, ˆy ∈ Y. We have ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '5.1 SETTINGS', 'after_section': '5.1 SETTINGS', 'context_after': 'All experiments are conducted by MIBench (Qiu et al., 2024b). None MID ', 'paragraph_idx': 41, 'before_section': '5.1 SETTINGS', 'context_before': 'to evaluate the utility of the target model with defense. ', 'modified_lines': '• Distortion. This metric is used to quantify the modification to the predicted probability vectors by defenses. We take the L1 distance between the outputs with and without defense. It is denoted as “dist”. 5.2 COMPARISON WITH STATE-OF-THE-ART DEFENSES Table 1: MIA resistance of various defenses under soft-label attacks. 
Mirror C2FMI ↓ acc ↓ acc5 ↑ σeval 10.0% 18.8% 2526 9.0% 17.6% 2448 4.8% 11.4% 2758 3.2% 7.8% 2602 6.6% 14.4% 2613 1.2% 3.0% 2527 ', 'original_lines': '• Prediction Bias. This metric is used to quantify the modification to the predicted proba- bility vectors by defense methods. We take the L1 distance between the outputs with and without defense. Avg L1 is the average over private test samples, and Max L1 is the largest one. Lower values of both suggest that the defense method causes less modification to the outputs. 5.2 COMPARISON WITH PREVIOUS STATE-OF-THE-ART DEFENSES In this section, we evaluate the robustness of our defense by comparing it against an undefended model and prior state-of-the-art defenses, including MID (Wang et al., 2021), BiDO (Peng et al., 2022), LS (Struppek et al., 2024) and TL (Ho et al., 2024). We adhere to the official configurations for each defense method, and the corresponding hyperparameters are detailed in Appendix B. We evaluate the MIA robustness under various black-box MIAs, including both soft-label and hard- label attacks. We conduct experiments on different target models and private datasets to demonstrate that our approach performs effectively across diverse scenarios. For soft-label attacks, we compare our method with previous defense strategies under the Mirror and C2FMI attacks. The attack results are listed in Table 1. We can observe that our SSD achieves significant improvements over existing defense strategies, especially when the attack has a strong performace. Specifically, under the Mirror attack against IR-152 trained on the FaceScrub dataset, our method reduces the attack accuracy from 52.4% to 19.4%, achieving a 3.6% greater reduction compared to the previous SOTA method TL. For C2FMI attacks against VGG16 models trained on the FaceScrub dataset, our method reduces the attack accuracy to approximately 1/9 of that without defense, which is only a quarter of the accuracy achieved under the TL defense. Table 1: MIA robustness against soft-label attacks. Model Dataset Defense ↓ Acc@1 IR-152 CelebA IR-152 FaceScrub VGG-16 FaceScrub ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 41}, {'section': '5.2 COMPARISON WITH STATE-OF-THE-ART DEFENSES', 'after_section': None, 'context_after': 'MID LS TL SSD ', 'paragraph_idx': 43, 'before_section': None, 'context_before': 'SSD None ', 'modified_lines': '39.6% 63.2% 2135 40.0% 61.2% 2152 BiDO 31.0% 55.6% 2168 28.8% 56.8% 2286 31.2% 51.6% 2175 22.8% 35.8% 2753 ', 'original_lines': 'BiDO ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 COMPARISON WITH STATE-OF-THE-ART DEFENSES', 'after_section': None, 'context_after': 'F EXPERIMENTS ON HIGH RESOLUTION To adapt to high resolution, we choose Mirror as the attacker. The prior distribution is StyleGAN2 trained on FFHQ with a resolution of 1024 × 1024. The generated images are center-cropped to achieves the best MIA robustness, with a good utility. MID LS TL 195 183 194 ', 'paragraph_idx': 44, 'before_section': None, 'context_before': '0.82 1.26 ', 'modified_lines': 'It can be seen that our SSD is superior to other defenses. 800 × 800 and resized to 224 × 224. The target models are ResNet-152 trained on FaceScrub, and the evaluation model is an Inception-v3. Since high resolution is computationally expensive, we only attack the first 10 labels and reconstruct 5 images for each label. The attack results are shown in Table 8 and the models’ utility and settings are shown in Table 9. 
It can be seen that our SSD still Table 8: MIA resistance of various defenses under high-resolution Mirror attack. ↓ acc ↓ acc5 None 94% 70% 90% 62% BiDO 66% 86% 82% 48% 58% 92% 42% 66% SSD ↑ σeval ', 'original_lines': 'It can be seen that our defense has the best MIA robustness against RLB. The models’ utility and defenses’ settings are consistent with the Tables 3-4, which shows that we also preserve the best model’s utility. 800 × 800, resized to 224 × 224, and inputted to the target model. The target model is ResNet-152, and the evaluation model is Inception-v3. The first 10 classes of FaceScrub are attacked, and for each class, we reconstruct 5 images. The attack results are shown in Table 8 and the models’ utility are shown in Table 9. Although models are more vulnerable on high resolution, our defense still Table 8: The MIA robustness of all defenses under Mirror attack on high resolution. No Defense BiDO SSD (ours) ↓ Acc@1 ↓ Acc@5 70% 62% 66% 48% 58% 42% 94% 90% 86% 82% 92% 66% ↑ δeval ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-16 11:49:18
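The revision text recorded in the entry above estimates P(ŷ|y) by the sample mean of f(x') over the validation examples of class y (the quantity it calls q_y). The sketch below only mirrors that description; `model`, `val_loader` and `num_classes` are placeholder names introduced here, and the code is not taken from the paper the record describes.

```python
# Illustrative only: per-class mean of softmax outputs over a validation set,
# i.e. row y approximates q_y = mean_{x' in D_y} f(x') as described above.
import torch

@torch.no_grad()
def class_mean_outputs(model, val_loader, num_classes: int) -> torch.Tensor:
    sums = torch.zeros(num_classes, num_classes)
    counts = torch.zeros(num_classes)
    for x, y in val_loader:                          # y: integer class labels
        probs = torch.softmax(model(x), dim=-1)      # f(x') in the notation above
        sums.index_add_(0, y, probs)
        counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
    return sums / counts.clamp(min=1).unsqueeze(-1)  # row y holds q_y
```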
ICLR.cc/2025/Conference
Q50GiYqOu4
lfxUG2QmBw
[{'section': '2.2', 'after_section': None, 'context_after': 'Tianqu Zhuang1*, Hongyao Yu2*, Yixiang Qiu1*, Hao Fang1*, Bin Chen2#, Shu-Tao Xia1 1Shenzhen International Graduate School, Tsinghua University, China ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'STEALTHY SHIELD DEFENSE: A CONDITIONAL ', 'modified_lines': 'MUTUAL INFORMATION-BASED APPROACH AGAINST BLACK-BOX MODEL INVERSION ATTACKS ', 'original_lines': 'MUTUAL INFORMATION-BASED POST-PROCESSING AGAINST BLACK-BOX MODEL INVERSION ATTACKS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'black-box attacks effectively. Existing defenses focus on modifying the weights and structure of the model, but black-box attackers only exploit the outputs, and thus are less susceptible. ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': 'Figure 1: An overview of Stealthy Shield Defense. The probability simplex is a triangle when the number of classes is three. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) minimized and I( ˆY ; Y ) maximized. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The contributions of this paper are: • We introduce CMI into model inversion defense for the first time, and theoretically prove ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'theoretical guarantee, SSD achieves a better trade-off between MIA resistance and model’s utility. Without the need to retrain the model, SSD is plug-and-play and easy to deploy. ', 'modified_lines': '', 'original_lines': 'Figure 1: An overview of Stealthy Shield Defense. The probability simplex is a triangle when the number of classes is three. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) minimized and I( ˆY ; Y ) maximized. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'and model’s utility, exhibiting good generalizability across various attack algorithms, train- ing datasets, and model architectures. 2.1 MODEL INVERSION ATTACKS AND DEFENSES ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'distortion is introduced to constrain the modification to the outputs. We speed up our algorithm by GPU-based water-filling method as well. ', 'modified_lines': '• Our experiments indicate that we outperform all competitors, in terms of MIA resistance 2 RELATED WORKS ', 'original_lines': '• Our experiments indicate that we outperform all competitors, in terms of MIA-resistance 2 RELATED WORK ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_after': 'I(X; ˆY ) hurts the model’s utility. Especially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. 
', 'paragraph_idx': 18, 'before_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_before': '(1) ', 'modified_lines': 'They reduced I(X; ˆY ) to prevent attackers from inferring the information about D. However, low ', 'original_lines': 'They reduced I(X; ˆY ) to prevent attackers from inferring the information of D. However, low ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': None, 'context_after': '(2) maximizing I(Z; Y ) to preserve the model’s utility. 4 METHODOLOGY ', 'paragraph_idx': 20, 'before_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_before': 'As an alternative, they introduced the information bottleneck (IB), which is defined as ', 'modified_lines': 'I(X; Z) − β · I(Z; Y ) where β > 0. They used (2) as a regularizer to train f , minimizing I(X; Z) to resist MIAs while ', 'original_lines': 'I(X; Z) − λ · I(Z; Y ) where λ > 0. They used it as a regularizer to train f , minimizing I(X; Z) to resist MIAs while ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '2.2', 'after_section': None, 'context_after': 'I(X; ˆY |Y = y) := ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'We aim to resist black-box MIAs where attackers cannot access Z, so we still minimize I(X; ˆY ) instead of I(X; Z). ', 'modified_lines': 'Furthermore, we observe that all MIA algorithms target one fixed label during attacking. Formally, let Dy := {x ∈ X : (x, y) ∈ D} be the sub-dataset with the ground truth label y. When y is given, all attackers aim to reconstruct ˆDy as close to Dy as possible. Against their intention, we propose to minimize ', 'original_lines': 'Furthermore, we observe that all MIA algorithms target one fixed label. Formally, let Dy := {x ∈ X : (x, y) ∈ D} be the sub-dataset whose ground truth label is y. For a given y ∈ Y, all attackers aim to reconstruct ˆDy as close to Dy as possible. Against their intention, we propose to minimize ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': None, 'context_after': 'To protect the complete D, we minimize (3) for each y ∈ Y with the weight of P(y). It is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '(3) I(X; ˆY |Y = y) quantifies the dependence between X and ˆY when Y = y. We minimize it to ', 'modified_lines': 'prevent attackers from inferring the information about Dy. ', 'original_lines': 'prevent attackers from inferring the information of Dy. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'after_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_after': 'I(X; ˆY |Y ) = I(X; ˆY ) − I( ˆY ; Y ). ', 'paragraph_idx': 22, 'before_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_before': '(4) ', 'modified_lines': 'Theorem 1. CMI is a special case of information bottlenecks (2) when Z = ˆY and β = 1, i.e. ', 'original_lines': 'Theorem 1. CMI is a special case of the information bottleneck (2) when Z = ˆY and λ = 1, i.e. ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Wang et al. (2020). 
• Maximize I( ˆY ; Y ) to preserve the useful information for tasks, and increase the depen- 4.2 MINIMIZE CMI VIA POST-PROCESSING Hamidi et al., 2024; Yang et al., 2025). In contrast to them, we minimize CMI via post-processing. 4 ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'two aspects: • Minimize I(X; ˆY ) to compress the redundant information in inputs, and decrease the de- ', 'modified_lines': 'pendence between inputs and predictions. It improves the MIA resistance as shown by dence between predictions and ground truths. It improves the model’s utility obviously. The I(X; Z) in IB is challenging to calculate because the input space X and representation space Z are both high-dimensional. Wang et al. (2020) could only approximate IB by variational bounds (Alemi et al., 2017). Fortunately, as a special case of IB, CMI can be calculated directly (Yang et al., 2024). Previous works used CMI as a regularizer and minimized it during training (Yang et al., 2024; ', 'original_lines': 'pendence between inputs and predictions. It improves the resistance to MIAs as shown by dence between predictions and ground truths. It improves the utility obviously. I(X; Z) in IB is challenging to calculate because the input space X and representation space Z are both high-dimensional. Previous work could only approximate IB by variational bounds (Alemi et al., 2017). Fortunately, as a special case of IB, CMI can be calculated directly (Yang et al., 2024). Previous work used CMI as a regularizer and minimized it during training models (Yang et al., 2024; ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'To get P(ˆy|x), we have P(ˆy|x) = fˆy(x) by the design of neural classifiers. To get P(y|x), an intuitive idea is that P(y|x) = P(ˆy|x) for y = ˆy. But Guo et al. (2017) have To get P(ˆy|y), we have ', 'paragraph_idx': 28, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'ˆy∈Y P(ˆy|x) log P(ˆy|x) P(ˆy|y) instead.2 ', 'modified_lines': ' expectation. Next we manage to calculate P(ˆy|x), P(y|x) and P(ˆy|y). demonstrated its inaccuracy in modern neural classifiers. Inspired by their work, we introduce the temperature mechanism to adjust it. ', 'original_lines': 'expectation. Next we need P(ˆy|x), P(y|x) and P(ˆy|y). demonstrated that it is inaccurate in modern neural networks. Inspired by their work, we introduce the temperature mechanism to adjust it. ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 28}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': None, 'context_after': 'x′∈Dy x′∈Dy In information theory, minimizing mutual information under bounded distortion constraints is defined as H( ˆY |X = x) := − ', 'paragraph_idx': 32, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'f (x′)), ', 'modified_lines': 'where KL is the Kullback-Leibler divergence. To minimize it, we fix mean modify f (x). Let p ∈ ∆Y be the modified version of f (x) and our objective is KL(p|| mean f (x′) for simplicity and f (x′)). Additionally, we constrain ∥p − f (x)∥1 ≤ ε to preserve the model’s utility, where ε > 0 is the distortion controller. known as the rate-distortion problem (Shannon, 1959), which is for signal compression. If a signal has less information, it is easier to compress, and a stricter distortion bound can be applied. 
Inspired by his work, we introduce Shannon entropy to quantify the information in ˆY when X = x, which is ', 'original_lines': 'where KL is Kullback-Leibler divergence, a binary convex function. To minimize it, we fix f (x′) for simplicity and modify f (x). Let p ∈ ∆Y be the modified version of f (x) and mean f (x′)). Additionally, we constrain ∥p − f (x)∥1 ≤ ε to preserve the our objective is KL(p|| mean model’s utility, where ε > 0 is the distortion controller. known as the rate-distortion problem (Shannon, 1959) for signal compression. If a signal has less information, it is easier to compress, and a stricter distortion bound can be applied. Inspired by their work, we introduce Shannon entropy to quantify the information in ˆY when X = x, which is ', 'after_paragraph_idx': None, 'before_paragraph_idx': 32}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'time (Algorithm 2). Without sampling, we have to consider all y, ˆy ∈ Y. The time complexity is Ω(|Y|2), 5 ', 'paragraph_idx': 35, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'P(ˆy|x) log P(ˆy|x). ˆy∈Y ', 'modified_lines': 'Our constraint becomes ∥p − f (x)∥1 ≤ ε · H( ˆY |X = x), where the distortion bound is proportional to the amount of information. It reduces the modification when the information is limited, and en- hances the compression when the information is abundant. We refer to it as adaptive rate-distortion. 2After sampling, we only need to consider one y ∈ Y and all ˆy ∈ Y, so we can solve within O(|Y| log |Y|) which is unacceptable when Y is large. ', 'original_lines': 'Our new constraint is ∥p − f (x)∥1 ≤ ε · H( ˆY |X = x), where the distortion bound is proportional to the amount of information. It reduces the modification when the information is limited, and enhances the compression when the information is abundant. We refer to it as adaptive rate-distortion. 2After sampling, we only need to consider one y ∈ Y and all ˆy ∈ Y, so we can solve it in O(|Y| log |Y|) which is unacceptable when |Y| is large. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 34}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'Sample y ∈ Y with the probability of softmax( f (x) qy ← mean x′∈Dy H ← − (cid:80) f (x′); T ); min KL(p||qy), s.t. ∥p − f (x)∥1 ≤ ε · H, ', 'paragraph_idx': 37, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'Algorithm 1: post-processing to minimize CMI. Input: original output f (x), temperature T , distortion controller ε, validation set D. ', 'modified_lines': 'Output: modified output p∗. ˆy∈Y fˆy(x) log fˆy(x); Solve the convex optimization problem and return the optimal solution: ', 'original_lines': 'Output: modified output p. Solve the convex optimization problem and return the optimal p: ˆy∈Y fˆy(x) log fˆy(x); ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 37}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': '(5) is a convex optimization problem that can be solved by existing optimizers. Furthermore, we 5.1 SETTINGS resized to 64 × 64. 6 Published as a conference paper at ICLR 2025 Table 1: MIA resistance of various defenses under soft-label attacks. 
Mirror C2FMI None MID LS TL SSD None MID LS TL SSD None MID LS TL SSD 0.96 0.90 0.99 1.03 2 5 ', 'paragraph_idx': 37, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': '(5) ', 'modified_lines': 'Our defense is summarized as Algorithm 1. Without the need to retrain the model, it is plug-and-play and easy to deploy. Note that D′ := {qy : y ∈ Y} can be calculated and stored in advance. If the model owner and the defender are not the same, the owner only needs to provide D′ instead of D, avoiding communication costs and privacy risks. derive the explicit solution in Appendix B, calculate it within O(|Y| log |Y|) time in Algorithm 2, accelerate it via GPUs in Algorithm 3, and evaluate the computation cost in Appendix C. 5 EXPERIMENTS Datasets. We select CelebA (Liu et al., 2015) and FaceScrub (Ng & Winkler, 2014) as private datasets, and FFHQ (Karras et al., 2018) as public dataset. CelebA has 10,177 labels and we only take 1,000 labels with the most images. FaceScrub has 530 labels and 106,863 images, but we only take 43,147 images because the other URLs are broken. FFHQ has 70,000 unlabeled images. All images are cropped and resized to 64 × 64. We use 80% of the private data for training, 10% for validation, and 10% for testing. Models. We select IR-152 (He et al., 2015) and VGG-16 (Simonyan & Zisserman, 2014) as target models, and MaxViT (Tu et al., 2022) as evaluation model. IR-152 and MaxViT are pre-trained on MS-Celeb-1M (Guo et al., 2016), and VGG-16 is pre-trained on ImageNet (Deng et al., 2009). They are fine-tuned for 100 epochs on the training set, and we take the version with the highest accuracy on the validation set. The evaluation models achieve 97.3% test accuracy on CelebA and 99.3% on FaceScrub. Attacks. We select Mirror (An et al., 2022) and C2FMI (Ye et al., 2024b) as soft-label attackers, and BREP (Kahla et al., 2022) and LOKT (Nguyen et al., 2023) as hard-label attackers. They attack the first 100 private labels and reconstruct 5 images for each label. For BREP and LOKT, we train GANs and surrogate models on FFHQ. For Mirror and C2FMI, we use the 256 × 256 StyleGAN2 (Karras et al., 2019) trained on FFHQ. The generated images are center-cropped to 176 × 176 and Defenses. We select MID (Wang et al., 2020), BiDO (Peng et al., 2022), LS (Struppek et al., 2023) and TL (Ho et al., 2024) as competitors. They need to retrain the target model, whereas we only post-process the outputs from the undefended model. For fair comparison, we carefully tune their hyper-parameters to achieve similar accuracies on the validation set. All hyper-parameters are in Appendix D. To evaluate the MIA resistance and model’s utility, we consider the following metrics: ↓ Acc1 ↓ Acc5 ↑ δeval 625 18.0% 31.0% 629 17.8% 31.6% BiDO 10.6% 25.8% 614 660 9.0% 15.2% 13.6% 28.6% 633 3.2% 8.2% 728 496 55.2% 76.8% 534 38.0% 61.0% BiDO 34.4% 60.4% 526 503 45.8% 73.4% 39.2% 63.2% 535 32.2% 46.4% 604 29.0% 51.8% 544 34.6% 64.8% 520 BiDO 20.0% 44.4% 556 30.2% 56.4% 531 558 17.8% 41.8% 16.4% 40.0% 604 ↑ δf ace 1.22 1.22 1.16 1.36 1.24 1.46 0.89 0.93 1.15 1.00 0.94 0.97 1.03 1.13 ', 'original_lines': 'Our defense is implemented by Algorithm 1. Without the need to retrain the model, it is plug-and- play and easy to deploy. Note that the qy, y ∈ Y can be calculated and stored in advance to reduce computation cost. If the model owner differs from the defender, the owner only needs to provide the defender with qy instead of D, avoiding communication cost and privacy risks. 
derive the explicit solution in Appendix B, calculate it by Water-Filling in Algorithm 2, accelerate it by GPUs in Algorithm 3, and evaluate the computation cost in Appendix C. 5 EXPERIMENT Datasets. Following the previous work, we use CelebA (Liu et al., 2014) and FaceScrub (Ng & Winkler, 2014) as private datasets. CelebA contains 10,177 labels and we only take 1000 labels with the most images (Kahla et al., 2022). FaceScrub contains 530 labels and 43,147 images.3 All images are cropped and resized to 64 × 64 pixels. We use 80% of the data for training, 10% for validation, and 10% for testing. The validation set is used to select the best trained models, training hyperparameters, and defense hyperparameters. Models. VGG-16 (Simonyan & Zisserman, 2014) and IR-152 (He et al., 2015) are selected as target models. They are trained with various defenses. The evaluation model is a FaceNet (Cheng et al., 2017). Attacks. We focus on state-of-the-art black-box MIAs, including BREP (Kahla et al., 2022), Mirror (An et al., 2022), C2FMI (Ye et al., 2024b), LOKT (Nguyen et al., 2023) and RLBMI (Han et al., 2023). We attack the first 100 labels in the private dataset, reconstructing 5 images for each label. For BREP and LOKT, we use the FFHQ (Karras et al., 2019) to train GANs and surrogate models under official settings. For Mirror and C2FMI, we adopt the 256 × 256 GAN trained on FFHQ provided by (Karras et al., 2019). The generated images are center-cropped to 176 × 176 and then Metrics. To evaluate the MIA resistance and model’s utility, we consider the following metrics: • Attack Accuracy. The metric is used to imitate a human to determine whether recon- structed images correspond to the target identity or not. Specifically, we employ an evalu- ation model trained on the same dataset as the target model to re-classify the reconstructed images. We compute the top-1 and top-5 classification accuracies, denoted as “acc” and “acc5”, respectively. 3The original FaceScrub contains 106,863 images, but some images are unavailable because their URLs are invalid. • Feature Distance. The feature is extracted from the second-to-last layer of the model. This distance metric measures the average l2 distance between the features of reconstructed im- ages and the nearest private images. Consistent with previous research, we use both the evaluation model and a pre-trained FaceNet (Schroff et al., 2015) to generate the features. The corresponding feature distances are denoted as σeval and σf ace. A lower feature dis- tance indicates a closer semantic similarity between the reconstructed images and private samples. • Test Accuracy. The top-1 classification accuracy on the private test set. This metric is used to evaluate the utility of the target model with defense. • Distortion. This metric is used to quantify the modification to the predicted probability vectors by defenses. We take the L1 distance between the outputs with and without defense. It is denoted as “dist”. All experiments are conducted by MIBench (Qiu et al., 2024b). 
5.2 COMPARISON WITH STATE-OF-THE-ART DEFENSES ↓ acc ↓ acc5 ↑ σeval 10.0% 18.8% 2526 9.0% 17.6% 2448 4.8% 11.4% 2758 3.2% 7.8% 2602 6.6% 14.4% 2613 1.2% 3.0% 2527 BiDO 39.6% 63.2% 2135 40.0% 61.2% 2152 BiDO 31.0% 55.6% 2168 28.8% 56.8% 2286 31.2% 51.6% 2175 22.8% 35.8% 2753 BiDO 9.2% 24.8% 2740 17.4% 38.0% 2518 5.2% 17.2% 2911 13.8% 30.4% 2557 7.0% 18.6% 2777 5.0% 13.8% 2970 ↑ σf ace 1.31 1.23 1.17 1.33 1.27 1.56 0.88 0.92 0.98 1.18 1.02 0.95 1.06 1.06 1.17 ↓ acc ↓ acc5 ↑ σeval 3.6% 8.0% 2521 0.2% 0.4% 2382 3.8% 2598 0.8% 4.2% 2536 1.4% 7.0% 2528 2.6% 0% 0.4% 2377 17.6% 41.2% 2196 3.2% 7.6% 3055 12.2% 25.6% 2528 11.0% 30.4% 2390 7.4% 21.0% 2341 3.2% 7.2% 3107 5.8% 13.2% 2907 2.0% 7.0% 2986 5.0% 14.8% 2625 7.0% 20.4% 2662 7.6% 19.8% 2565 0.8% 5.0% 3223 ↑ σf ace 1.36 1.56 1.31 1.39 1.37 1.67 1.36 1.14 1.07 1.24 1.38 1.14 1.22 1.11 1.08 1.11 1.37 ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 37}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Lukas Struppek, Dominik Hintersdorf, and Kristian Kersting. Be careful what you smooth for: Label smoothing can be a privacy shield but also a catalyst for model inversion attacks. International Conference on Learning Representations (ICLR), 2023. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Kersting. Plug & play attacks: Towards robust and flexible model inversion attacks. International Conference on Machine Learning (ICML), 2022. ', 'modified_lines': '', 'original_lines': '11 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-04-02 11:47:16
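The record above describes the evaluation protocol used in the SSD revisions (top-1/top-5 attack accuracy under a separate evaluation model, feature distances, test accuracy, and the L1 distortion between defended and undefended outputs). Purely as an illustrative aid, here is a minimal NumPy sketch of two of those metrics; the array shapes and the names `eval_logits`, `target_labels`, `p_defended`, `p_original` are my own assumptions, not code from the paper.

```python
# Illustrative sketch, not the paper's released evaluation code.
import numpy as np

def attack_accuracy(eval_logits: np.ndarray, target_labels: np.ndarray, k: int = 1) -> float:
    """Top-k accuracy of the evaluation model on reconstructed images.
    eval_logits: (N, C) scores; target_labels: (N,) attacked identities."""
    topk = np.argsort(-eval_logits, axis=1)[:, :k]
    return float(np.mean([label in row for row, label in zip(topk, target_labels)]))

def l1_distortion(p_defended: np.ndarray, p_original: np.ndarray) -> float:
    """Mean L1 distance between defended and undefended output distributions, each (N, C)."""
    return float(np.mean(np.sum(np.abs(p_defended - p_original), axis=1)))

# toy usage with random values
rng = np.random.default_rng(0)
logits, labels = rng.normal(size=(10, 5)), rng.integers(0, 5, size=10)
print(attack_accuracy(logits, labels, k=1), attack_accuracy(logits, labels, k=5))
```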
ICLR.cc/2025/Conference
lfxUG2QmBw
1r3xXWb3iA
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'MIAs, where attackers can only query the model and obtain outputs, are closer to real-world scenarios. The latest black-box attacks have outperformed the state-of-the-art white-box attacks, and existing defenses cannot resist them ef- fectively. To fill this gap, we propose Stealthy Shield Defense (SSD), a post- processing algorithm against black-box MIAs. Our idea is to modify the model’s outputs to minimize the conditional mutual information (CMI). We mathemat- we formulate a convex optimization problem and solve it via the water-filling https://github.com/ZhuangQu/Stealthy-Shield-Defense. 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Model inversion attacks (MIAs) aim to reconstruct the private training data by ', 'modified_lines': 'accessing the public model, raising concerns about privacy leakage. Black-box ically prove that CMI is a special case of Information Bottleneck (IB), and thus inherits the benefits of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our ef- fectiveness, both in resisting MIAs and preserving utility. To minimize CMI, method. Without the need to retrain the model, our defense is plug-and-play and easy to deploy. Experimental results indicate that SSD outperforms exist- ing defenses, in terms of MIA resistance and model’s utility, across various attack algorithms, private datasets, and model architectures. Our code is available at ', 'original_lines': 'accessing a public model, raising concerns about privacy leakage. Black-box ically prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB—making predictions less dependent on inputs and more dependent on ground truths. This theoretically guarantees our effec- tiveness, both in resisting MIAs and preserving utility. For minimizing CMI, method. Adaptive rate-distortion is introduced to constrain the modification to the outputs, and the water-filling is implemented on GPUs to address computa- tion cost. Without the need to retrain the model, our algorithm is plug-and-play and easy to deploy. Experimental results indicate that SSD outperforms existing defenses, in terms of MIA resistance and model’s utility, across various attack algorithms, training datasets, and model architectures. Our code is available at ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': '1 INTRODUCTION', 'context_after': 'posing the greatest risk (Qiu et al., 2024b). For instance, consider a face recognition access control system with a publicly accessible interface. Through carefully crafted malicious queries, model in- version attackers can infer the sensitive facial images stored in the system, along with the associated ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'data has raised concerns about privacy breaches. Recent studies (Fang et al., 2024b;a; 2025) have explored various attack methods to probe these privacy, such as gradient inversion (Fang et al., 2023; Yu et al., 2024b) and membership inference (Hu et al., 2021). 
Among the emergent threats, model ', 'modified_lines': 'inversion attacks (MIAs) aim to reconstruct the private training data by accessing the public model, ', 'original_lines': 'inversion attacks (MIAs) aim to reconstruct the private training data by accessing a public model, ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'The latest soft-label attack RLBMI (Han et al., 2023) and hard-label attack LOKT (Nguyen et al., 2023) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist black-box attacks effectively. Existing defenses focus on modifying the weights and structure of the model, but black-box attackers only exploit the outputs, and thus are less susceptible. To address these concerns, we propose Stealthy Shield Defense (SSD), a post-processing algorithm against black-box MIAs. As shown in Figure 1, the idea of SSD is to modify the model’s outputs The contributions of this paper are: ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'the details of the model, whereas black-box attackers can only query the model and obtain outputs. Black-box MIAs become more threatening than white-box because: (1) Black-box scenarios are more common. As models grow larger nowadays, they are mostly stored on servers and can only be ', 'modified_lines': 'accessed online, which is a typical black-box scenario. (2) Black-box attacks are more powerful. 1 Published as a conference paper at ICLR 2025 Figure 1: An overview of Stealthy Shield Defense. With 3 classes, the probability simplex is a triangle. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) minimized and I( ˆY ; Y ) maximized. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. to minimize the conditional mutual information (CMI). CMI quantifies the dependence between inputs and predictions when ground truths are given. In Theorem 1, we prove that CMI is a special case of Information Bottleneck (IB), and thus inherits the benefits of IB—making predictions less dependent on inputs and more dependent on ground truths. Under this theoretical guarantee, SSD achieves a better trade-off between MIA resistance and model’s utility. Without the need to retrain the model, SSD is plug-and-play and easy to deploy. ', 'original_lines': 'accessed online, which are typical black-box scenarios. (2) Black-box attacks are more powerful. 1 Published as a conference paper at ICLR 2025 Figure 1: An overview of Stealthy Shield Defense. The probability simplex is a triangle when the number of classes is three. CMI is defined as I(X; ˆY |Y ). According to our Theorem 1, minimizing CMI makes the mutual information I(X; ˆY ) minimized and I( ˆY ; Y ) maximized. As shown by Yang et al. (2024), minimizing CMI makes the outputs more concentrated class-wisely. to minimize the conditional mutual information (CMI) (Yang et al., 2024). CMI quantifies the dependence between inputs and predictions when ground truths are given. In Theorem 1, we prove that CMI is a special case of information bottlenecks (IB), and thus inherits the advantages of IB— making predictions less dependent on inputs and more dependent on ground truths. Under this theoretical guarantee, SSD achieves a better trade-off between MIA resistance and model’s utility. Without the need to retrain the model, SSD is plug-and-play and easy to deploy. 
', 'after_paragraph_idx': 4, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'algorithm by GPU-based water-filling method as well. • Our experiments indicate that we outperform all competitors, in terms of MIA resistance 2 RELATED WORKS ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'its effectiveness. • We propose a post-processing algorithm to minimize CMI without retraining models. In ', 'modified_lines': 'our algorithm, temperature is introduced to calibrate the sampling probability and adaptive rate-distortion is introduced to constrain the modification to outputs. We speed up our and model’s utility, exhibiting good generalizability across various attack algorithms, pri- vate datasets, and model architectures. ', 'original_lines': 'our algorithm, temperature is introduced to calibrate the probabilities and adaptive rate- distortion is introduced to constrain the modification to the outputs. We speed up our and model’s utility, exhibiting good generalizability across various attack algorithms, train- ing datasets, and model architectures. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'trains multiple surrogate models and applies white-box attacks to them. To address the threat of MIAs, a variety of defenses have been proposed. MID (Wang et al., 2020), BiDO (Peng et al., 2022), and LS (Struppek et al., 2023) change the training losses, TL (Ho et al., defense against black-box MIAs is still limited. 2 Inputs 𝑿Target ModelDistributions of 𝒀with High CMIPost-ProcessingAlgorithmDistributions of 𝒀with Low CMIGroundTruthLabels 𝒀⋯AliceBobChris𝑰𝑿;𝒀↓⋯𝑰𝒀;𝒀↑ Published as a conference paper at ICLR 2025 2.2 ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'Yuan et al., 2023; Qiu et al., 2024a) and black-box. We focus on black-box MIAs, where attackers can only query the model and obtain outputs. In this scenario, BREP (Kahla et al., 2022) utilizes zero-order optimization to drive the latent vectors away from the decision boundary. Mirror (An ', 'modified_lines': 'et al., 2022) and C2FMI (Ye et al., 2023) explore genetic algorithms. LOKT (Nguyen et al., 2023) 2024) freezes some layers of the model, and CA-FaCe (Yu et al., 2024a) changes the structure of the model. However, black-box attackers only exploit the outputs, and thus are rarely hindered. The In this paper, we propose a novel black-box defense based on post-processing, without retraining the model. Experimental results indicate that we outperform these existing defenses. ', 'original_lines': 'et al., 2022) and C2F (Ye et al., 2024b) explore genetic algorithms. LOKT (Nguyen et al., 2023) 2024) freezes some layers of the model, and CA-FaCe (Yu et al., 2024a) change the structure of the model. However, black-box attackers only exploit the outputs, and thus are rarely hindered. The In this paper, we propose a novel black-box defense based on post-processing, without retraining the model. Experimental results indicate that we outperform the existing defenses. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Mutual Information-based Defense (MID). Yang et al. (2024) proposed to use conditional mutual information (CMI) as a performance metric for DNNs, providing the calculation formula and geometric interpretation of CMI. 
By minimizing CMI, they improve classifiers (Yang et al., 2025) and address class imbalance (Hamidi et al., 2024). By (Yang & Ye, 2024). 3 PRELIMINARY 3.1 NOTATIONS ˆY ∈ Y be the label predicted by f , and Z ∈ Z be the intermediate representation in f . Note that Y → X → Z → ˆY is a Markov chain. Let P be the probability function and P(x) := P{X = x}, P(y) := P{Y = y}, P(x, ˆy|y) := P{X = x, ˆY = ˆy | Y = y}, etc. 3.2 MODEL INVERSION ATTACKS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'model should compress the redundant information in inputs while preserving the useful information for tasks. They later highlighted that information is compressed layer-by-layer in DNNs (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). Alemi et al. (2017) proposed Variational Infor- ', 'modified_lines': 'mation Bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2020) applied VIB to their maximizing CMI, they improve knowledge distillation (Ye et al., 2024) and address nasty teachers In this paper, we theoretically prove that CMI is a special case of IB and thus inherits the benefits of IB. Furthermore, we propose a novel model inversion defense based on CMI. Let f : X → Y be a neural classifier, X ∈ X be an input, Y ∈ Y be the ground truth label, Let ∆Y be the probability simplex over Y, f (x) ∈ ∆Y be the output from the softmax layer when x is input to f , and fˆy(x) ∈ (0, 1) be the ˆy-th component of f (x), ˆy ∈ Y. ', 'original_lines': 'mation Bottleneck (VIB) to estimate the bounds of IB, and Wang et al. (2020) applied VIB in their maximizing CMI, they improve knowledge distillation (Ye et al., 2024a) and address nasty teachers In this paper, we theoretically prove that CMI is a special case of IB and thus inherits the advantages of IB. Furthermore, we propose a novel model inversion defense based on CMI. Let f : X → Y be a neural classifier, X ∈ X be an input to f , Y ∈ Y be the ground truth label, Let ∆Y be the probability simplex over Y. Let f (x) ∈ ∆Y be the output from the softmax layer of f when x is input to f , and fˆy(x) ∈ (0, 1) be the ˆy-th component of f (x), ˆy ∈ Y. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'after_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_after': 'I(X; Z) − β · I(Z; Y ) (2) where β > 0. They used (2) as a regularizer to train f , minimizing I(X; Z) to resist MIAs while maximizing I(Z; Y ) to preserve the model’s utility. 4 METHODOLOGY 4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE We aim to resist black-box MIAs where attackers cannot access Z, so we still minimize I(X; ˆY ) I(X; ˆY |Y = y) := ', 'paragraph_idx': 19, 'before_section': '3.3 MUTUAL INFORMATION-BASED DEFENSE (MID)', 'context_before': 'I(X; ˆY ) hurts the model’s utility. Especially, I(X; ˆY ) = 0 iff X and ˆY are independent, in which case f is immune to any attack but useless at all. ', 'modified_lines': 'As an alternative, they introduced the Information Bottleneck (Tishby & Zaslavsky, 2015), which is defined as 1Some literature refers to hard-label as label-only, and soft-label as black-box. 3 Published as a conference paper at ICLR 2025 instead of I(X; Z). Furthermore, we observe that all MIA algorithms target one fixed label during attacking. Formally, let Dy := {x ∈ X : (x, y) ∈ D} be the sub-dataset labeled with the ground truth y. When y is given, all attackers aim to reconstruct ˆDy as close to Dy as possible. 
Against their intention, we propose to minimize ', 'original_lines': '1Some literature refers to hard-label as label-only, and soft-label as black-box. 3 Published as a conference paper at ICLR 2025 As an alternative, they introduced the information bottleneck (IB), which is defined as instead of I(X; Z). Furthermore, we observe that all MIA algorithms target one fixed label during attacking. Formally, let Dy := {x ∈ X : (x, y) ∈ D} be the sub-dataset with the ground truth label y. When y is given, all attackers aim to reconstruct ˆDy as close to Dy as possible. Against their intention, we propose to minimize ', 'after_paragraph_idx': 19, 'before_paragraph_idx': 18}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'I(X; ˆY |Y ) := ', 'paragraph_idx': 6, 'before_section': None, 'context_before': '(3) ', 'modified_lines': 'I(X; ˆY |Y = y) quantifies the dependence between X and ˆY when Y = y. We minimize (3) to prevent attackers from inferring the information about Dy. To protect the complete D, we minimize (3) for each y ∈ Y with the weight of P(y). This is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'original_lines': 'I(X; ˆY |Y = y) quantifies the dependence between X and ˆY when Y = y. We minimize it to prevent attackers from inferring the information about Dy. To protect the complete D, we minimize (3) for each y ∈ Y with the weight of P(y). It is equivalent to minimizing the conditional mutual information (CMI), which is defined as ', 'after_paragraph_idx': 6, 'before_paragraph_idx': None}, {'section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'after_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_after': 'I(X; ˆY |Y ) = I(X; ˆY ) − I( ˆY ; Y ). • Minimize I(X; ˆY ) to compress the redundant information in inputs, and decrease the de- Wang et al. (2020). • Maximize I( ˆY ; Y ) to preserve the useful information for tasks, and increase the depen- 4.2 MINIMIZE CMI VIA POST-PROCESSING I(X; ˆY |Y ) = P(y) (cid:88) y∈Y x∈X ', 'paragraph_idx': 21, 'before_section': '4.1 CONDITIONAL MUTUAL INFORMATION-BASED DEFENSE', 'context_before': '(4) ', 'modified_lines': 'Theorem 1. CMI is a special case of Information Bottleneck (2) taking Z = ˆY and β = 1, i.e. Our proof is in Appendix A. Our theorem proves that CMI inherits the benefits of Information Bottleneck (IB), i.e., minimizing CMI has two effects: pendence between inputs and predictions. This improves the MIA resistance as shown by dence between predictions and ground truths. This improves the model’s utility obviously. I(X; Z) in IB is challenging to calculate because X and Z are both high-dimensional. Wang et al. (2020) could only approximate IB by variational bounds (Alemi et al., 2017). Fortunately, as a special case of IB, CMI can be calculated directly (Yang et al., 2024). Previous works minimized CMI during training (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). In contrast to them, we propose a training-free method to minimize CMI. We have (cid:88) (cid:88) ', 'original_lines': 'Theorem 1. CMI is a special case of information bottlenecks (2) when Z = ˆY and β = 1, i.e. Our proof is provided in Appendix A. Our theorem proves that CMI inherits the benefits of IB in two aspects: pendence between inputs and predictions. It improves the MIA resistance as shown by dence between predictions and ground truths. It improves the model’s utility obviously. 
The I(X; Z) in IB is challenging to calculate because the input space X and representation space Z are both high-dimensional. Wang et al. (2020) could only approximate IB by variational bounds (Alemi et al., 2017). Fortunately, as a special case of IB, CMI can be calculated directly (Yang et al., 2024). Previous works used CMI as a regularizer and minimized it during training (Yang et al., 2024; Hamidi et al., 2024; Yang et al., 2025). In contrast to them, we minimize CMI via post-processing. 4 Published as a conference paper at ICLR 2025 We transform CMI as follows: (cid:88) (cid:88) ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': '(cid:80) ˆy∈Y P(ˆy|x) log P(ˆy|x) demonstrated its inaccuracy in modern neural classifiers. Inspired by their work, we introduce the To get P(ˆy|y), we have ', 'paragraph_idx': 25, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'ˆy∈Y P(ˆy|x) log P(ˆy|x) P(ˆy|y) for each x input to f . For simplicity, we sample y ∈ Y with the probability of P(y|x) and minimize ', 'modified_lines': ' y∈Y P(y|x) (cid:80) 4 Published as a conference paper at ICLR 2025 P(ˆy|y) instead,2 which equals the original objective in terms of expectation. Next we need to calculate P(ˆy|x), P(y|x) and P(ˆy|y). To get P(ˆy|x), we have P(ˆy|x) = fˆy(x) by design of neural classifiers. To get P(y|x), an intuitive idea is that P(y|x) = P(ˆy|x) for y = ˆy, but Guo et al. (2017) had temperature to calibrate it. ', 'original_lines': 'It is equal to the original objective in terms of mathematical y∈Y P(y|x) (cid:80) P(ˆy|y) instead.2 expectation. Next we manage to calculate P(ˆy|x), P(y|x) and P(ˆy|y). To get P(ˆy|x), we have P(ˆy|x) = fˆy(x) by the design of neural classifiers. To get P(y|x), an intuitive idea is that P(y|x) = P(ˆy|x) for y = ˆy. But Guo et al. (2017) have temperature mechanism to adjust it. ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 25}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'P(ˆy|x) log ', 'paragraph_idx': 29, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'fˆy(x′), ', 'modified_lines': 'where ≈ is by that the samples in Dy are i.i.d. to P(x|y), and thus the sample mean can estimate the conditional expectation. In practice we use the validation set as Dy, because the training samples are overfitted by f and may cause inaccurate estimation. Now our objective becomes ', 'original_lines': 'where the ≈ is by the fact that the samples in Dy are i.i.d. to P(x|y), and thus the sample mean can estimate the conditional expectation. In practice we use the validation set as Dy, because the training samples are overfitted by f causing inaccurate estimation. Now the objective becomes ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 29}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'known as the rate-distortion problem (Shannon, 1959), which is for signal compression. If a signal has less information, it is easier to compress, and a stricter distortion bound can be applied. 
Inspired by his work, we introduce Shannon entropy to quantify the information in ˆY when X = x, which is ', 'paragraph_idx': 32, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'Additionally, we constrain ∥p − f (x)∥1 ≤ ε to preserve the model’s utility, where ε > 0 is the distortion controller. ', 'modified_lines': 'In information theory, minimizing the mutual information under bounded distortion constraints is ', 'original_lines': 'In information theory, minimizing mutual information under bounded distortion constraints is ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 31}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'Output: modified output p∗. Sample y ∈ Y with the probability of softmax( f (x) qy ← mean ', 'paragraph_idx': 36, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 Algorithm 1: post-processing to minimize CMI. ', 'modified_lines': 'Input: original output f (x), validation set D, temperature T , distortion controller ε. ', 'original_lines': 'Input: original output f (x), temperature T , distortion controller ε, validation set D. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': None}, {'section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'after_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_after': 'min KL(p||qy), s.t. ∥p − f (x)∥1 ≤ ε · H, p ∈ ∆Y. 5 EXPERIMENTS 5.1 SETTINGS take 1,000 labels with the most images. FaceScrub has 530 labels and 106,863 images, but we only take 43,147 images because the other URLs are broken. FFHQ has 70,000 unlabeled images. All validation, and 10% for testing. To evaluate the MIA resistance and model’s utility, we consider the following metrics: 6 Published as a conference paper at ICLR 2025 Table 1: MIA resistance of various defenses under soft-label attacks. Mirror C2FMI ↓ Acc5 ', 'paragraph_idx': 36, 'before_section': '4.2 MINIMIZE CMI VIA POST-PROCESSING', 'context_before': 'T ); ', 'modified_lines': 'Solve the convex optimization problem: (6) return the optimal solution p∗; Datasets. We select CelebA (Liu et al., 2015) and FaceScrub (Ng & Winkler, 2014) as the private datasets, and FFHQ (Karras et al., 2018) as the public dataset. CelebA has 10,177 labels and we images are cropped and resized to 64 × 64. We use 80% of the private samples for training, 10% for Models. We select IR-152 (He et al., 2015) and VGG-16 (Simonyan & Zisserman, 2014) as the target models, and MaxViT (Tu et al., 2022) as the evaluation models. IR-152 and MaxViT are pre- trained on MS-Celeb-1M (Guo et al., 2016), and VGG-16 is pre-trained on ImageNet (Deng et al., 2009). They are fine-tuned on the training set, and we select the best version by the validation set. The evaluation model on CelebA achieves 97.3% test accuracy, and the one on FaceScrub achieves 99.3%. Attacks. We select Mirror (An et al., 2022), C2FMI (Ye et al., 2023) and RLBMI (Han et al., 2023) as the soft-label attackers, and BREP (Kahla et al., 2022) and LOKT (Nguyen et al., 2023) as the hard-label attackers. They attack the first 100 private labels and reconstruct 5 images per label. For RLBMI, BREP and LOKT, we train GANs and surrogate models on FFHQ. For Mirror and C2FMI, we use the 256 × 256 StyleGAN2 (Karras et al., 2019) trained on FFHQ, whose generated images are center-cropped to 176 × 176 and resized to 64 × 64. Defenses. 
We select MID (Wang et al., 2020), BiDO (Peng et al., 2022), LS (Struppek et al., 2023), TL (Ho et al., 2024) and Purifier (Yang et al., 2023) as the competitors. Purifier trains a CVAE, and the others retrain the target models. We carefully tune their hyper-parameters to achieve the similar validation accuracies. All hyper-parameters of defenses are in Appendix D. Attack Accuracy Let the evaluation model reclassify the reconstructed images. The top-1 and top-5 accuracies are denoted as Acc1 and Acc5 respectively. Lower percentages indicate better MIA resistance. Feature Distance The image features are extracted from the penultimate layer of a model. We take the average L2 feature distance between the reconstructed images and the nearest training images. The features are extracted by the evaluation model and a FaceNet (Schroff et al., 2015) trained on VGGFace2 (Cao et al., 2017), denoted as δeval and δf ace respectively. Higher distances indicate better MIA resistance. Test Accuracy The accuracy of the target model on the test set, denoted as Acc. Higher percentage indicates better utility. Distortion The L1 distance between the outputs with and without defense. We take the average on the test set, denoted as Dist. Lower distance indicates better utility. All experiments are conducted on MIBench (Qiu et al., 2024b). The experiments about RLBMI, Purifier and high resolution are in Appendix E-G. 2 5 1 - R I A b e l e C 2 5 1 - R I b u r c S e c a F 6 1 - G G V b u r c S e c a F 2 5 1 - R I A b e l e C 2 5 1 - R I b u r c S e c a F 6 1 - G G V b u r c S e c a F ', 'original_lines': 'Solve the convex optimization problem and return the optimal solution: (5) Our defense is summarized as Algorithm 1. Without the need to retrain the model, it is plug-and-play and easy to deploy. Note that D′ := {qy : y ∈ Y} can be calculated and stored in advance. If the model owner and the defender are not the same, the owner only needs to provide D′ instead of D, avoiding communication costs and privacy risks. (5) is a convex optimization problem that can be solved by existing optimizers. Furthermore, we derive the explicit solution in Appendix B, calculate it within O(|Y| log |Y|) time in Algorithm 2, accelerate it via GPUs in Algorithm 3, and evaluate the computation cost in Appendix C. Datasets. We select CelebA (Liu et al., 2015) and FaceScrub (Ng & Winkler, 2014) as private datasets, and FFHQ (Karras et al., 2018) as public dataset. CelebA has 10,177 labels and we only images are cropped and resized to 64 × 64. We use 80% of the private data for training, 10% for Models. We select IR-152 (He et al., 2015) and VGG-16 (Simonyan & Zisserman, 2014) as target models, and MaxViT (Tu et al., 2022) as evaluation model. IR-152 and MaxViT are pre-trained on MS-Celeb-1M (Guo et al., 2016), and VGG-16 is pre-trained on ImageNet (Deng et al., 2009). They are fine-tuned for 100 epochs on the training set, and we take the version with the highest accuracy on the validation set. The evaluation models achieve 97.3% test accuracy on CelebA and 99.3% on FaceScrub. Attacks. We select Mirror (An et al., 2022) and C2FMI (Ye et al., 2024b) as soft-label attackers, and BREP (Kahla et al., 2022) and LOKT (Nguyen et al., 2023) as hard-label attackers. They attack the first 100 private labels and reconstruct 5 images for each label. For BREP and LOKT, we train GANs and surrogate models on FFHQ. For Mirror and C2FMI, we use the 256 × 256 StyleGAN2 (Karras et al., 2019) trained on FFHQ. 
The generated images are center-cropped to 176 × 176 and resized to 64 × 64. Defenses. We select MID (Wang et al., 2020), BiDO (Peng et al., 2022), LS (Struppek et al., 2023) and TL (Ho et al., 2024) as competitors. They need to retrain the target model, whereas we only post-process the outputs from the undefended model. For fair comparison, we carefully tune their hyper-parameters to achieve similar accuracies on the validation set. All hyper-parameters are in Appendix D. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': '5.1 SETTINGS', 'after_section': None, 'context_after': '496 MID 534 BiDO 34.4% 60.4% 526 503 39.2% 63.2% 32.2% 46.4% 604 LS ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'SSD None ', 'modified_lines': '55.2% 76.8% 38.0% 61.0% 45.8% 73.4% 535 ', 'original_lines': '55.2% 76.8% 38.0% 61.0% 45.8% 73.4% 535 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '↓ Acc1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.97 1.03 1.13 ', 'modified_lines': '', 'original_lines': ' 2 5 1 - R I A b e l e C 2 5 1 - R I b u r c S e c a F 6 1 - G G V b u r c S e c a F ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 SETTINGS', 'after_section': None, 'context_after': '↓ Acc1 ', 'paragraph_idx': 37, 'before_section': None, 'context_before': '1.08 1.40 ', 'modified_lines': '7 ', 'original_lines': '2 5 1 - R I A b e l e C 2 5 1 - R I b u r c S e c a F 6 1 - G G V b u r c S e c a F ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Si Chen, Mostafa Kahla, R. Jia, and Guo-Jun Qi. Knowledge-enriched distributional model inversion attacks. International Conference on Computer Vision (ICCV), 2020. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In Automatic Face & Gesture Recognition (FG), 2017. ', 'modified_lines': '', 'original_lines': '10 w/o T.0.010.040.070.100.130.160.19(a) Temperature T015.030.045.060.0Attack Accuracy%Acc10.801.001.201.401.60Face Distanceface0.10.20.30.40.50.60.70.8(c) Distortion Controller 015.030.045.060.0Attack Accuracy%Acc10.801.001.201.401.60Face Distancefaceface w/o Ada.w/o T.0.010.040.070.100.130.160.19(b) Temperature T96.597.097.598.098.5Test Accuracy%Acc00.040.080.12DistortionDist0.10.20.30.40.50.60.70.8(d) Distortion Controller 96.597.097.598.098.5Test Accuracy%Acc00.040.080.12DistortionDistDist w/o Ada. Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Ana- lyzing and improving the image quality of stylegan. Computer Vision and Pattern Recognition (CVPR), 2019. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'adversarial networks. In Computer Vision and Pattern Recognition (CVPR), 2018. ', 'modified_lines': '', 'original_lines': '11 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'En-Hui Yang, Shayan Mohajer Hamidi, Linfeng Ye, Renhao Tan, and Beverly Yang. 
Conditional mutual information constrained deep learning: Framework and preliminary results. International Symposium on Information Theory (ISIT), 2024. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'undermining knowledge distillation fully distillable. European Conference on Computer Vision (ECCV), 2024. ', 'modified_lines': '', 'original_lines': '12 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where w∗ B satisfies (cid:80) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i ∈ B, ', 'modified_lines': '', 'original_lines': '> ε 2 , (13) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Algorithm 3: GPU-based water-filling. Input: PyTorch tensors f , q of size |Y|. Output: PyTorch tensor p∗ of size |Y|. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2 do ', 'modified_lines': '', 'original_lines': 'Restore the indices of fi, qi; return min(max(fi, w∗ Aqi), w∗ Bqi) for i ∈ Y; ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 SETTINGS', 'after_section': None, 'context_after': '17 Published as a conference paper at ICLR 2025 Table 8: Comprehensive results on high resolution. ', 'paragraph_idx': 40, 'before_section': None, 'context_before': '0.362 87.1% 0.191 ', 'modified_lines': 'Purifier (Yang et al., 2023) is a black-box defense against membership inference attacks, and perhaps resists model inversion attacks. Despite the lack of details about λ and kNN, we reproduce their work setting λ = k = 1. If the L2 distance between the input and the nearest training sample is less than 0.00005, then we swap the top-1 and top-2 labels. The validation set is used to train the CVAE. Aligned with Tables 1-3, the target model is the same IR-152 trained on CelebA. Table 7 shows that our SSD outperforms Purifier. G HIGH RESOLUTION We use HD-CelebA-Cropper3 to generate high resolution images, which are cropped and resized to 224 × 224. The IR-152 and MaxViT models are retrained on this new CelebA, and the test accuracy of MaxViT achieves 97.2%. We select Mirror as the attacker, equipped with the 1024 × 1024 StyleGAN2 trained on FFHQ. The generated images are center-cropped to 800 × 800 and resized to 224 × 224. Due to the huge computational cost for high resolution, we only attack the first 20 labels and reconstruct 5 images per label. Table 8 shows that our SSD still outperforms the other defenses. ', 'original_lines': 'Purifier (Yang et al., 2023) is a black-box defense against membership inference attacks and may also be effective against model inversion attacks. Despite the lack of details about λ and kNN, we reproduce their work setting λ = k = 1. If the L2 distance between the input and the nearest training sample is less than 0.00005, then we swap the top-1 and top-2 labels. The validation set is used to train the CVAE. Table 7 shows that we outperform Purifier. G EXPERIMENT ON HIGH RESOLUTION We use HD CelebA Cropper3 to generate high resolution CelebA, whose images are cropped and resized to 224 × 224. The target models IR-152 and evaluation model MaxViT are retrained on new CelebA. The hyper-parameters of defenses are in Table 8 and the test accuracy of MaxViT is 97.2%. We select Mirror as the attacker, using the 1024×1024 StyleGAN2 trained on FFHQ. 
The generated images are center-cropped to 800 × 800 and resized to 224 × 224. Since high resolution is computa- tionally expensive, we only attack the first 20 labels and reconstruct 5 images for each label. Table 8 shows that our SSD still outperforms the other defenses. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-05-18 11:20:33
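The record above states the SSD post-processing step as the convex problem of minimizing KL(p‖q_y) subject to ‖p − f(x)‖₁ ≤ ε·H with p on the probability simplex, which the paper solves via a closed-form water-filling procedure (its Algorithms 2 and 3). Only to illustrate that projection, the sketch below solves the same problem numerically with a generic SciPy solver rather than the closed-form step; the function and variable names are mine, and SLSQP handles the non-smooth L1 constraint only approximately.

```python
# Illustrative numerical version of  min_p KL(p || q_y)
#   s.t.  ||p - f(x)||_1 <= eps * H(f(x)),  p in the simplex.
# Not the paper's water-filling algorithm; a generic solver is used instead.
import numpy as np
from scipy.optimize import minimize

def ssd_project(fx: np.ndarray, qy: np.ndarray, eps: float) -> np.ndarray:
    fx = np.clip(fx, 1e-12, 1.0)
    qy = np.clip(qy, 1e-12, 1.0)
    bound = eps * float(-(fx * np.log(fx)).sum())          # adaptive bound eps * H(f(x))

    def kl(p):
        p = np.clip(p, 1e-12, 1.0)
        return float((p * (np.log(p) - np.log(qy))).sum())

    constraints = [
        {"type": "eq",   "fun": lambda p: p.sum() - 1.0},                 # stay on the simplex
        {"type": "ineq", "fun": lambda p: bound - np.abs(p - fx).sum()},  # distortion budget
    ]
    res = minimize(kl, x0=fx, bounds=[(0.0, 1.0)] * fx.size,
                   constraints=constraints, method="SLSQP")
    return res.x

# toy usage: f(x) is the original output, q_y the class-conditional mean output
print(ssd_project(np.array([0.70, 0.20, 0.10]), np.array([0.50, 0.30, 0.20]), eps=0.5))
```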
ICLR.cc/2025/Conference
1r3xXWb3iA
OPrMtJz8xS
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'state-of-the-art white-box attacks, and existing defenses cannot resist them ef- fectively. To fill this gap, we propose Stealthy Shield Defense (SSD), a post- processing algorithm against black-box MIAs. Our idea is to modify the model’s ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Model inversion attacks (MIAs) aim to reconstruct the private training data by ', 'modified_lines': 'accessing the public model, raising concerns about privacy leakage. Black- box MIAs, where attackers can only query the model and obtain outputs, are closer to real-world scenarios. The latest black-box attacks have outperformed ', 'original_lines': 'accessing the public model, raising concerns about privacy leakage. Black-box MIAs, where attackers can only query the model and obtain outputs, are closer to real-world scenarios. The latest black-box attacks have outperformed the ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Yu et al., 2024b) and membership inference (Hu et al., 2021). Among the emergent threats, model inversion attacks (MIAs) aim to reconstruct the private training data by accessing the public model, posing the greatest risk (Qiu et al., 2024b). For instance, consider a face recognition access control ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'mains, such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019) and dataset distillation (Zhong et al., 2024b;a). However, their integration with sensitive training data has raised concerns about privacy breaches. Recent studies (Fang et al., 2024b;a; 2025) have ', 'modified_lines': 'explored various attack methods to probe such privacy, such as gradient inversion (Fang et al., 2023; ', 'original_lines': 'explored various attack methods to probe these privacy, such as gradient inversion (Fang et al., 2023; ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'black-box attacks effectively. Existing defenses focus on modifying the weights and structure of the model, but black-box attackers only exploit the outputs, and thus are less susceptible. ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'more common. As models grow larger nowadays, they are mostly stored on servers and can only be accessed online, which is a typical black-box scenario. (2) Black-box attacks are more powerful. The latest soft-label attack RLBMI (Han et al., 2023) and hard-label attack LOKT (Nguyen et al., ', 'modified_lines': '2023) have outperformed state-of-the-art white-box attacks. (3) Existing defenses cannot resist ', 'original_lines': '2023) have outperformed the state-of-the-art white-box attacks. (3) Existing defenses cannot resist ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 4}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'vate datasets, and model architectures. 2 RELATED WORKS ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'algorithm by GPU-based water-filling method as well. 
• Our experiments indicate that we outperform all competitors, in terms of MIA resistance ', 'modified_lines': 'and model’s utility, exhibiting strong generalizability across various attack algorithms, pri- ', 'original_lines': 'and model’s utility, exhibiting good generalizability across various attack algorithms, pri- ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}]
2025-05-18 12:06:16
ICLR.cc/2025/Conference
11LaW3RAjD
aVngd4UE3Z
[{'section': 'Abstract', 'after_section': None, 'context_after': 'qi qi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '⊤zi NON-SCALED : yi = qi ', 'modified_lines': '', 'original_lines': 'SCALED : yi = ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUND', 'after_section': None, 'context_after': '⊤Si ⊤Si (5d) (5c) ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'and ⋆; please refer to Table 5 for detailed choices for different architectures. ', 'modified_lines': 'SCALED : yi = (4d) ', 'original_lines': ' (4d) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '12 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. ', 'modified_lines': '', 'original_lines': ' Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, and Tie-Yan Liu. Stable, fast and accurate: Kernelized attention with relative positional encoding. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '13 Under review as a conference paper at ICLR 2025 702 703 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020b. ', 'modified_lines': '', 'original_lines': 'Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv´e J´egou. Training data-efficient image transformers & distillation through attention. CoRR, abs/2012.12877, 2020. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv´e J´egou. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, pp. 10347–10357. PMLR, 2021. Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhut- dinov. Transformer dissection: An unified understanding for transformer’s attention via the lens of kernel. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4344–4353, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \\Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018. Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020. Ross Wightman. Pytorch image models. 
https://github.com/rwightman/ pytorch-image-models, 2019. Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, and Tat-Seng Chua. Next-gpt: Any-to-any multimodal llm. In Forty-first International Conference on Machine Learning, 2024. Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, and Vikas Singh. Nystr¨omformer: A nystr¨om-based algorithm for approximating self-attention. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), 2021. Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, and Yoon Kim. Gated linear attention transformers with hardware-efficient training. arXiv preprint arXiv:2312.06635, 2023. Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, and Yoon Kim. Parallelizing linear transformers with the delta rule over sequence length. arXiv preprint arXiv:2406.06484, 2024. Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. Big bird: Transformers for longer sequences. In Advances in Neural Information Processing Systems (NeurIPS 2020), 2020. Michael Zhang, Kush Bhatia, Hermann Kumbong, and Christopher R´e. The hedgehog & the porcupine: Expressive linear attentions with softmax mimicry, 2024. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. In The Twelfth International Conference on Learning Representations, 2024a. Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model, 2024b. Zhenhai Zhu and Radu Soricut. H-transformer-1d: Fast one-dimensional hierarchical attention for sequences. In Proceedings of the 11th International Joint Conference on Natural Language Processing (IJCNLP 2021), 2021. 14 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'com/HazyResearch/m2/blob/main/bert/yamls/finetune-glue/ 30 ', 'paragraph_idx': 65, 'before_section': None, 'context_before': 'for the LARGE model family. In our preliminary experiments, we found that training diverged when using a learning rate of 8 · 10−4 for BERT-LARGE. ', 'modified_lines': 'For completeness, in Table 7 we present the results with the BERT pretraining4 and BERT 24 finetuning5 recipes available in the M2 repository. Finetuning For the GLUE finetuning experiments, we employ four different configurations: • BERT24: Available in Izsak et al. (2021) and the file https://github. hf-transformer-finetune-glue-bert-base-uncased.yaml. 4https://github.com/HazyResearch/m2/blob/main/bert/yamls/pretrain/ hf-transformer-pretrain-bert-base-uncased.yaml 5https://github.com/HazyResearch/m2/blob/main/bert/yamls/finetune-glue/ hf-transformer-finetune-glue-bert-base-uncased.yaml ', 'original_lines': 'Finetuning For the GLUE finetuning experiments, we employ three different configurations: • M2-BASE: Available in Fu et al. (2023), Section C.1 and the file https://github. monarch-mixer-finetune-glue-960dim-parameter-matched.yaml. • M2-LARGE: Available in Fu et al. (2023), Section C.1 and the file https://github. com/HazyResearch/m2/blob/main/bert/yamls/finetune-glue/ monarch-mixer-large-finetune-glue-1792dim-341m-parameters. yaml. • Modified: Same as M2-LARGE but all learning rates are set to 10−5. The recipes are summarized in Appendix D.5. 
The Modified hyperparameter set was devised as M2-LARGE was found to diverge for BERT-LARGE. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'resolutions. As the results illustrate, the abilities of LION-S can be effectively transferred among different resolutions. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'during inference than the training. However, since the LION-S architecture does not include any positional embeddings, it can be used with different resolutions. In Figure 8, we present the accuracy of the architectures trained on 224 × 224 resolution on the ImageNet dataset at different inference ', 'modified_lines': '', 'original_lines': ' 31 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 Under review as a conference paper at ICLR 2025 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 Model BERT LION-LIT LION-RETNET LION-S BERTLARGE LION-LIT LARGE LION-RETNET LARGE 68.64 LION-S LARGE 69.16 Table 10: Recipe selection for the GLUE benchmark. MLM Acc. Recipe MNLI RTE QQP QNLI SST2 STSB MRPC COLA Avg. 67.70 65.47 66.62 67.05 69.88 67.11 M2-BASE Mod. M2-BASE Mod. M2-BASE Mod. M2-BASE Mod. M2-LARGE Mod. M2-LARGE Mod. M2-LARGE Mod. M2-LARGE Mod. 84.63 83.09 82.50 80.88 82.85 80.52 83.17 78.14 84.97 85.68 83.20 83.73 83.82 83.82 83.71 84.38 64.33 58.27 63.47 54.95 52.49 52.85 53.50 56.39 69.10 67.44 54.51 57.18 52.85 60.72 50.04 57.69 89.99 89.35 89.72 88.80 89.63 88.93 89.35 88.68 31.59 89.90 89.08 89.85 41.48 89.72 38.81 89.57 89.80 89.88 89.27 88.83 88.43 88.36 88.89 88.52 49.15 91.89 84.90 89.93 53.67 89.79 53.98 90.30 92.51 92.16 91.74 91.32 91.86 91.55 93.00 92.39 91.93 93.04 90.44 91.86 91.13 92.93 91.59 92.93 86.69 86.56 87.18 85.42 85.96 82.05 37.73 51.22 53.61 88.63 68.57 88.02 36.87 87.29 36.98 87.68 89.62 87.78 89.37 87.07 83.94 84.48 77.87 77.60 87.87 90.89 85.25 90.18 82.41 89.66 82.29 90.57 60.42 55.02 49.22 46.98 53.58 49.13 53.18 49.75 51.16 56.14 23.35 55.36 45.79 56.83 50.27 59.54 82.25 80.26 80.31 78.03 78.59 77.23 72.09 72.84 64.92 82.95 72.41 80.76 61.00 81.34 60.96 81.58 ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 12: Ablation studies on image classification. Additional ablations with the CIFAR-100 dataset to understand the contribution of softmax, nonlinearities in a model is presented. Soft., PosEmb and ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'experiments on CIFAR-100 data using the same hyperparameters with LION-S. We have observed that either nonlinearity or softmax is essential for the model to converge with a nice accuracy. Though positional embedding boosts the accuracy, a mask can easily replace it. ', 'modified_lines': '', 'original_lines': ' Figure 8: Top-1 accuracy on Imagenet of the models at different resolutions. Images are resized at the corresponding resolution and fed into the model. 
Due to positional embeddings, ViT and LION-LIT models cannot perform with sizes larger than the training size while LION-S can preserve the accuracy for much higher resolutions. 32 3264112168224280336392448512Resolution010203040506070Top-1AccuracyTrainingResolutionLION-SViT-TLION-LIT Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '• Calculating Y: 2L2D • TOTAL: L(6D2 + 4LD + 2L) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• Calculating Q, K, V: 6LD2, • Attention A = QKT : 2L2D • Softmax (assuming 1 FLOP for exp): 2L2 ', 'modified_lines': '', 'original_lines': ' 33 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '34 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• L forward + backward recurrences: 2L(5D2 + 10D + 3) • Calculating Y: 2L(D + 1) • TOTAL: L(16D2 + 24D + 7) ', 'modified_lines': '', 'original_lines': ' D.10 DISTILLATION RESULTS OF LION-S We have also used the same recipe from DeiT distillation Touvron et al. (2021) and distilled the RegNet network into LION-S. We observed that the distillation outperforms the original ViT-Tiny on the ImageNet dataset. The results are shown in the table below: Table 14: Distillation results of LION-S. Models Top-1 Acc. LION-S VIT-Tiny LION-S (Distilled) 67.95 70.23 70.44 D.11 TRAINING TIME FOR DIFFERENT MODELS IN VISION EXPERIMENTS Table 15: Training Time per Epoch for Different Models. Best in bold and second best is in italic form. Training Strategy (Model) Time (s) /Epoch Attention (VIT) Attention (LION-S) Attention (LION-LIT) Parallel Scan (Hydra) 24.6 35.8 26.6 43.4 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-17 23:09:16
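The record above keeps the per-layer FLOP tallies quoted in the LION appendix: L(6D² + 4LD + 2L) for softmax attention and L(16D² + 24D + 7) for the forward-plus-backward recurrence. A quick numeric check of the two expressions follows; the particular L and D are my own example values, chosen only to show that the 4L²D term makes the attention count grow quadratically with sequence length while the recurrent count grows linearly.

```python
# Plugging example values into the two FLOP formulas quoted in the record above.
L, D = 196, 192  # assumed token count and per-head dimension, not values from the paper

attention_flops = L * (6 * D**2 + 4 * L * D + 2 * L)   # softmax attention
recurrent_flops = L * (16 * D**2 + 24 * D + 7)         # forward + backward recurrence

print(f"softmax attention: {attention_flops:,} FLOPs")  # 72,932,384
print(f"linear recurrence: {recurrent_flops:,} FLOPs")  # 116,510,044

# For long sequences the quadratic 4*L^2*D term dominates the attention count,
# e.g. at L = 4096 attention needs ~1.4e10 FLOPs versus ~2.4e9 for the recurrence.
```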
ICLR.cc/2025/Conference
aVngd4UE3Z
Mg8wpyIwlD
[{'section': '2 PRELIMINARIES AND BACKGROUND', 'after_section': None, 'context_after': '(5a) (5b) ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '⊤zi NON-SCALED : yi = qi ', 'modified_lines': 'SCALED : yi = ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'qi qi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'four additional parameters, Λi, γi, βi, αi, along with their corresponding operation functions and ⋆; please refer to Table 5 for detailed choices for different architectures. ', 'modified_lines': '', 'original_lines': ' SCALED : yi = ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-17 23:15:42
ICLR.cc/2025/Conference
Mg8wpyIwlD
1HhRljNOHj
[{'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'Table 4: Image classification task results. We present the Top-1 accuracy on the validation data. Model ViT-T ', 'paragraph_idx': 64, 'before_section': '5 EXPERIMENTS', 'context_before': 'When considering the number of tokens (analogous to resolution in images) that Transformer-like architectures can effectively process, two primary limitations emerge: (i) positional embeddings and (ii) memory constraints. Transformers are typically trained up to a specific sequence length and ', 'modified_lines': 'lack predefined positional encodings for tokens that exceed this limit, which can hurt their ability to LION-S shows competitive performance against ViT models. ', 'original_lines': 'LION-S shows competitive performance against ViT models. Results marked with ’-’ are in progress and will be updated for the final version of the paper. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 64}, {'section': 'Abstract', 'after_section': None, 'context_after': 'recognize token positions beyond trained lengths. Furthermore, the quadratic complexity of these models during inference places significant demands on memory resources, often leading to constraints that reduce processing efficiency. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'which is ∼ 94.4% more efficient. Similarly, BERT goes OOM for sequence length 14, 336, while for the same sequence length, LION-S requires less than 15GB of GPU memory. ', 'modified_lines': '', 'original_lines': 'lack predefined positional encodings for tokens that exceed this limit, which can hurt their ability to ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-21 20:01:14
ICLR.cc/2025/Conference
1HhRljNOHj
jnpUSav9yA
[{'section': 'Abstract', 'after_section': None, 'context_after': '(5a) (5b) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '⊤zi NON-SCALED : yi = qi ', 'modified_lines': '', 'original_lines': 'qi qi ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '12 q⊤', 'after_section': None, 'context_after': '(cid:16)(cid:0)F (AB) ⊙ F (MB)(cid:1)FLIP(V) (18) (cid:17) ', 'paragraph_idx': 42, 'before_section': None, 'context_before': 'an L-dimensional exchange matrix, as detailed in Appendix C.4. Thus, the outputs of the forward and backward recurrences can be expressed as follows: ', 'modified_lines': 'Y = (CF + CB)−1( YF + YB ), where ', 'original_lines': 'YF = (AF ⊙ MF )V, YB = (AB ⊙ MB)V = FLIP Y = (CF + CB)−1( YF + YB ), where ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '12 q⊤q⊤', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 48, 'before_section': '12 q⊤q⊤', 'context_before': '. (19) ', 'modified_lines': ' YF = (AF ⊙ MF )V, YB = (AB ⊙ MB)V = FLIP ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 48}, {'section': '2 PRELIMINARIES AND BACKGROUND', 'after_section': None, 'context_after': 'Transformer as Linear Recurrent ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'Transformer SSM ', 'modified_lines': ' RNN ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'after_section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'context_after': 'Local Att. Sparse Transformer Longformer ', 'paragraph_idx': 56, 'before_section': None, 'context_before': 'S5 (v1) S5 (v2) Mamba ', 'modified_lines': 'Mamba (From Beck et al. (2024)) LRU xLSTM ', 'original_lines': '', 'after_paragraph_idx': 56, 'before_paragraph_idx': None}]
2024-11-22 15:39:49
ICLR.cc/2025/Conference
jnpUSav9yA
U2A7u5oOHy
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent ', 'paragraph_idx': 9, 'before_section': '1 INTRODUCTION', 'context_before': 'recurrent models (cf., Appendix B) into their bidirectional counterparts. ', 'modified_lines': '• We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). • Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. ', 'original_lines': '• We further propose LION-S, an architecture that builds upon our proposed LION framework and combines the selective mechanism of SSMs. • LION-S shows competitive performance as compared to state-of-the-art vision and masked language transformer models, while being significantly more resource efficient. • We demonstrate that LION-S solves the Long Range Arena (LRA) task, successfully becom- ing the first transformer to employ recurrent inference in this setting. • In contrast to transformers, LION-S requires no additional positional encoding, enabling it to extrapolate beyond the context length or resolution during inference. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 9}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Notation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': '(5a) (5b) ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '⊤Si ⊤zi NON-SCALED : yi = qi ', 'modified_lines': ' SCALED : yi = ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'qi qi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and ⋆; please refer to Table 5 for detailed choices for different architectures. ', 'modified_lines': '', 'original_lines': 'SCALED : yi = ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proposition 3.2. Considering the following forward recurrence: i = λiSF ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We precede our main result with a proposition from Sun et al. 
(2023), which states that an autoregres- sive transformer can be expressed as a linear recurrent model: ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ', 'after_section': '1 ', 'context_after': 'where we use for the diagonal elements of the attention matrix and the mask. By splitting (10) into upper and lower triangular forms, we obtain the following: for lower triangular elements, and ', 'paragraph_idx': 30, 'before_section': '1 ', 'context_before': '(10) ', 'modified_lines': 'for upper triangular elements, 2 q⊤ 1 k1 1 k1 q⊤ q⊤ · · · q⊤ 2 q⊤ 1 kL 1 kL 1 k2 1 k1 1 k2 q⊤ q⊤ · · · 1 1 (cid:124) λ2 1 λ2 ... 1 λ1 λ1λ2 ... ', 'original_lines': 'for upper triangular elements, ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:125) q⊤ 2 k1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': ' ', 'modified_lines': '', 'original_lines': '+ (cid:124) 1 2 q⊤ L kL = (cid:125) (cid:124) 1 2 q⊤ 1 k1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'As in (11) and (12), the attention matrix and mask are split into lower (AF , MF ) and upper triangular (AB, MB) matrices. The scaling operator divides each row of the attention matrix to its summed value, and hence equals to a diagonal matrix C−1 multiplied by the attention: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' (cid:124) 1 λ1 λ1λ2 ... λ2 1 λ2 ... λ2λ3 · · · λ2 · · · λL λ3 1 ... · · · λ3 · · · λL · · · λ4 · · · λL . . . ... λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 · · · 1 (cid:123)(cid:122) M (cid:125) = (cid:124) 1 λ1 1 λ1λ2 ... λ2 ... λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 1 ... (cid:123)(cid:122) MF + (cid:125) (cid:124) . . . · · · 1 1 λ2 λ2λ3 · · · λ2 · · · λL 1 λ3 1 · · · λ3 · · · λL · · · λ4 · · · λL . . . ... 1 (cid:123)(cid:122) MB (cid:125) −I (12) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': 'j=1 Mijkj + q⊤ i (cid:80)L ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'i ki, we can similarly split the scaling matrix into two parts as follows: ', 'modified_lines': '(cid:80)i ', 'original_lines': 'j=1 (cid:80)i ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. As shown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality, where the parameters Ci and Bi in the SSM correspond to qi and ki. 
However, this connection ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i 4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS', 'after_section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS', 'context_after': 'where exp(·) is applied element-wise and M can be learned with trainable ai or with selectivity as ai = log(σ(w⊤ ', 'paragraph_idx': 47, 'before_section': None, 'context_before': 'if i > j if i < j if i = j ', 'modified_lines': ' , M = exp(D), (28) ', 'original_lines': '', 'after_paragraph_idx': 47, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'appeared in (27a) and (27b), we can consider this term as a part of ki. 7 ', 'modified_lines': '', 'original_lines': ' 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This section illustrates the performance of LION-LIT and -S on well-established benchmarks: Long 5.1 LONG RANGE ARENA ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '86.07 5 EXPERIMENTS ', 'modified_lines': 'Range Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3, LION-S benefits from the parallelization capabilities built for masked attention during training. We similarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure 1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolate beyond context length during inference, which we will also demonstrate below. ', 'original_lines': ' Range Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3, LION-S benefits from the parallelization capabilities built for masked attention during training. We similarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure 1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolate beyond context length during inference, which we will also demonstrate below. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'pre-training the models on the C4 dataset (Dodge et al., 2021), followed by fine-tuning and evaluating their downstream performance on the GLUE benchmark (Wang et al., 2018). 
Both the pre-training and fine-tuning phases employ the M2 hyperparameters (Fu et al., 2023), except for the LARGE ', 'modified_lines': 'models, where learning rates of 2 · 10−4 and 10−5 for pretraining and finetuning were employed ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'LION-S LARGE 69.88 67.11 69.16 85.68 83.73 84.38 67.44 57.18 57.69 89.90 89.85 89.57 91.89 89.93 90.30 93.04 91.86 92.93 88.63 88.02 87.68 90.89 90.18 90.57 56.14 55.36 59.54 82.95 80.76 81.58 for stability based on our results in Appendix D.6. For additional experimental details and results with smaller scaled models, we refer to Appendix D.5 and Appendix D.4 respectively. ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'BERTLARGE LION-LIT LARGE ', 'modified_lines': 'LION-RETNET 68.64 83.82 60.72 89.72 89.79 92.93 87.29 89.66 56.83 81.34 ', 'original_lines': 'models, where learning rates of 2 · 10−4 and 10−5 for pretraining and finetuning were employed ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.3 IMAGE CLASSIFICATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MLM pretraining task and the GLUE finetuning tasks. However, when we test the models beyond the context length used in training, LION greatly retains or even improves the MLM accuracy in comparison to the BERT baseline, see Section 5.4. ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'context_after': 'Table 4: Image classification task results. We present the Top-1 accuracy on the validation data. Model ViT-T HYDRA-T LION-LIT LION-S LION-S (v2) ', 'paragraph_idx': 61, 'before_section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. 
Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'context_before': 'former and LION-S on ImageNet gets smaller. LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. ', 'modified_lines': 'LION-S shows competitive performance against ViT models. * indicates that results are directly copied from paper Zhu et al. (2024b), where the authors are training under a different setup (e.g., with data-augmentation). Vim-T∗ LION-RETNET ', 'original_lines': '5.4 CONTEXT EXTENSION AND MEMORY DURING INFERENCE When considering the number of tokens (analogous to resolution in images) that Transformer-like architectures can effectively process, two primary limitations emerge: (i) positional embeddings and (ii) memory constraints. Transformers are typically trained up to a specific sequence length and lack predefined positional encodings for tokens that exceed this limit, which can hurt their ability to LION-S shows competitive performance against ViT models. ', 'after_paragraph_idx': 61, 'before_paragraph_idx': 60}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'context_after': 'recognize token positions beyond trained lengths. Furthermore, the quadratic complexity of these models during inference places significant demands on memory resources, often leading to constraints that reduce processing efficiency. ', 'paragraph_idx': 59, 'before_section': None, 'context_before': 'which is ∼ 94.4% more efficient. Similarly, BERT goes OOM for sequence length 14, 336, while for the same sequence length, LION-S requires less than 15GB of GPU memory. ', 'modified_lines': '5.4 CONTEXT EXTENSION AND MEMORY DURING INFERENCE When considering the number of tokens (analogous to resolution in images) that Transformer-like architectures can effectively process, two primary limitations emerge: (i) positional embeddings and (ii) memory constraints. 
Transformers are typically trained up to a specific sequence length and lack predefined positional encodings for tokens that exceed this limit, which can hurt their ability to ', 'original_lines': '', 'after_paragraph_idx': 59, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'RTE 1 · 10−5 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '3 · 10−5 3 · 10−6 5 ', 'modified_lines': '', 'original_lines': ' MNLI 5 · 10−5 5 · 10−6 3 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-24 15:47:00
ICLR.cc/2025/Conference
U2A7u5oOHy
IzxBJJxLno
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'for spatial and temporal relationships within input sequences, reducing reliance on heuristic positional embeddings and facilitating straightforward scalability in context length and resolution. Using our framework and inspired by the recent state- ', 'modified_lines': 'space models, we propose three main running examples LION-LIT, LION-RETNET, and LION-S, a transformer with selective mask and recurrent inference. Numerical evaluations on tasks such as language modeling, the Long-Range Arena, and image classification show that LION framework achieves performance on par with state-of-the-art models while delivering fast training and inference efficiency. ', 'original_lines': 'space models, we propose LION-S, a transformer with selective mask and recurrent inference. Numerical evaluations on tasks such as language modeling, the Long- Range Arena, and image classification show that LION-S achieves performance on par with state-of-the-art models while delivering superior inference efficiency. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'original bidirectional Transformer (cf., Observation 3.1). Instead, we propose a novel design, LION, that allows the bidirectional Transformer to be expressed as a bidirectional RNN. Our framework retains the advantages of parallel training found in Transformers, offering bidirectionality in inference ', 'modified_lines': '', 'original_lines': 'while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Table 1: Summary of training and inference strategies. ⇄ represents bidirectionality of the method. Complexity indicates the computational and memory requirements during inference for processing ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': ' while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. Besides the popularity of Transformers and their variants, state space models (SSMs) have emerged as another family of architecture for sequence modeling due to their efficient inference capabilities (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). The representative works Mamba (Gu & Dao, 2024) and Mamba-2 (Dao & Gu, 2024) have also demonstrated strong performance in language modeling. Building on our bidirectional Transformer theory, LION framework combines the expressive power of bidirectional Transformers with the selective mechanism of Mamba, further enhancing the model’s capability to process long sequences while maintaining computational efficiency. Through this approach, we aim to provide a scalable and efficient solution for tasks that demand both long-range dependency modeling and dense information processing. 
Overall, our main contributions can be summarized as follows: • We propose a theoretical framework LION (Theorem 3.3), which expresses bidirectional Transformers as bidirectional RNNs, enabling efficient inference for long sequences while benefiting from well-established Transformer training (cf., Table 1). • Our theoretical framework offers the foundations to transform a wide class of autoregressive recurrent models (cf., Appendix B) into their bidirectional counterparts. • We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). • Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent network. Section 3 then explains our framework LION, and mathematically grounds our concrete contributions. Section 4 describes how to build LION-S by introducing selectivity via discretization of continuous state-space models, which is then followed by numerical evidence in Section 5 and the conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND Notation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. Attention. Attention have been a cornerstone of foundation models for several years (Vaswani et al., 2017; Kojima et al., 2022). Given a data sequence x1, x2, . . . , xL, a single-head softmax-attention uses a softmax function to define the attention weights: (qi, ki, vi) = (Wqxi, Wkxi, Wvxi) , yi = i (cid:88) j=1 exp(q⊤ i kj) p=1 exp(q⊤ i kp) (cid:80)i vj, (1) where qi, ki, vi, yi ∈ Rd and the weights Wq, Wk, Wv ∈ Rd×d with d being the projection dimension. With Q := [q1, . . . , qL]⊤, K := [k1, . . . , kL]⊤, V := [v1, . . . , vL]⊤ ∈ RL×d, we can then express the attention as the following matrix form: Y = softmax (cid:0)QK⊤(cid:1) V. Such matrix form is crucial for parallelized training over the sequence length. In contrast, (1) is used during inference for generating or processing tokens. However, for autoregressive transformers (Kojima et al., 2022), employing (1) requires storing the previous L tokens to attend to the latest token during inference. This approach is less efficient than RNNs, where only the state is stored regardless of the previous sequence (cf., Orvieto et al. (2023)). Attention can be generalized via a kernel function κ : Rd × Rd → R (Tsai et al., 2019) as 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 Figure 1: (Left) standard Transformer block. (Middle) training mode of LION with the bidirectional Transformer. 
(Right) inference mode of LION with the bidirectional RNN. Norm refers to Layer normalization, Proj is the projection operation to calculate Q, K, V and λ values, Scale is the scaling operation in Eq. (4), Inv is the inversion operation, A is the linear attention matrix, A = QKT , MF/B are forward/backward recurrence masks, yF/B are forward/backward outputs and cF/B are forward/backward are the scaling coefficients. For further definitions of the architectural elements in LION, please refer to Sections 2 and 3. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(2) Katharopoulos et al. (2020) introduces Linear Attention which replaces the exponential kernel ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '✓ O(Ld) ', 'modified_lines': '', 'original_lines': 'Besides the popularity of Transformers and their variants, state space models (SSMs) have emerged as another family of architecture for sequence modeling due to their efficient inference capabilities (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). The representative works Mamba (Gu & Dao, 2024) and Mamba-2 (Dao & Gu, 2024) have also demonstrated strong performance in language modeling. Building on our bidirectional Transformer theory, LION framework combines the expressive power of bidirectional Transformers with the selective mechanism of Mamba, further enhancing the model’s capability to process long sequences while maintaining computational efficiency. Through this approach, we aim to provide a scalable and efficient solution for tasks that demand both long-range dependency modeling and dense information processing. Overall, our main contributions can be summarized as follows: • We propose a theoretical framework LION (Theorem 3.3), which expresses bidirectional Transformers as bidirectional RNNs, enabling efficient inference for long sequences while benefiting from well-established Transformer training (cf., Table 1). • Our theoretical framework offers the foundations to transform a wide class of autoregressive recurrent models (cf., Appendix B) into their bidirectional counterparts. • We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). • Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent network. Section 3 then explains our framework LION, and mathematically grounds our concrete contributions. Section 4 describes how to build LION-S by introducing selectivity via discretization of continuous state-space models, which is then followed by numerical evidence in Section 5 and the conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND Notation. 
Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 Figure 1: (Left) standard Transformer block. (Middle) training mode of LION with the bidirectional Transformer. (Right) inference mode of LION with the bidirectional RNN. Norm refers to Layer normalization, Proj is the projection operation to calculate Q, K, V and λ values, Scale is the scaling operation in Eq. (4), Inv is the inversion operation, A is the linear attention matrix, A = QKT , MF/B are forward/backward recurrence masks, yF/B are forward/backward outputs and cF/B are forward/backward are the scaling coefficients. For further definitions of the architectural elements in LION, please refer to Sections 2 and 3. Attention. Attention have been a cornerstone of foundation models for several years (Vaswani et al., 2017; Kojima et al., 2022). Given a data sequence x1, x2, . . . , xL, a single-head softmax-attention uses a softmax function to define the attention weights: (qi, ki, vi) = (Wqxi, Wkxi, Wvxi) , yi = i (cid:88) j=1 exp(q⊤ i kj) p=1 exp(q⊤ i kp) (cid:80)i vj, (1) where qi, ki, vi, yi ∈ Rd and the weights Wq, Wk, Wv ∈ Rd×d with d being the projection dimension. With Q := [q1, . . . , qL]⊤, K := [k1, . . . , kL]⊤, V := [v1, . . . , vL]⊤ ∈ RL×d, we can then express the attention as the following matrix form: Y = softmax (cid:0)QK⊤(cid:1) V. Such matrix form is crucial for parallelized training over the sequence length. In contrast, (1) is used during inference for generating or processing tokens. However, for autoregressive transformers (Kojima et al., 2022), employing (1) requires storing the previous L tokens to attend to the latest token during inference. This approach is less efficient than RNNs, where only the state is stored regardless of the previous sequence (cf., Orvieto et al. (2023)). Attention can be generalized via a kernel function κ : Rd × Rd → R (Tsai et al., 2019) as yi = i (cid:88) j=1 κ(qi, kj) p=1 κ(qi, kp) (cid:80)i vj . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'κ(qi, kj) = exp(q⊤ a higher-dimensional space. For simplicity of notation, we use qi := ϕ(Wqxi) and similarly for ki := ϕ(Wkxi) in the sequel. This approach enables the transformer to be framed as an RNN with ', 'modified_lines': '', 'original_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. These models project tokens into a state space representation, and 1However, softmax based attention due to applying non-linearity into the attention formulation can not be linearized in this form ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'learn the discretized parameters ( ¯A, ¯B, and ¯C) of the continuous SSM (A(t), B(t), and C(t)) (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). 
Recent SSMs designed for language modeling, such as Mamba (Gu & Dao, 2024), use input-dependent matrices ¯Ai, ¯Bi, and ¯Ci, showing strong ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. These models project tokens into a state space representation, and ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Observation 3.1. First, we observe that the combination of the forward and backward recurrences of the linear recurrent model cannot yield the attention. Consider the following bidirectional recurrence equations: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'setting with equivalence to the scaled attention. We introduce LION, a bidirectional sequence-to- sequence framework equivalent to attention that benefits from attention parallelization during training and achieves fast linear recurrence during inference. ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'recurrence) attention scores are scaled individually, resulting in tokens not being properly scaled relative to the keys of other tokens in the sequence, unlike attention shown in Figure 2, parts a2 and b2. We precede our main result with a proposition from Sun et al. (2023), which states that an autoregres- sive transformer can be expressed as a linear recurrent model: Proposition 3.2. Considering the following forward recurrence: i = λiSF ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'are at Appendix C.1). These key differences can be described as follows: (i) The diagonal elements representing attention for each token appear in both recurrences, leading to twice the attention score for a token and itself compared to others. (ii) Causal (forward recurrence) and non-causal (backward ', 'modified_lines': ' ', 'original_lines': ' 4 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 Figure 2: Differences between attention and the addition of two linear recurrent models. a1) Addition of two linear transformers, a2) Attention with scaling, b1) Addition of two linear recurrent models, b2) Masked attention with scaling. The red text highlights the differences between attention and the summed recurrent models. We use for the non-causal (backward recurrence), and for the diagonal part of the attention. for the causal (forward recurrence), ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. 
The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': 'Our goal (SCALE(QK⊤ ⊙ M)), as this framework is more generalized and can be adapted to vari- ous linear recurrent models (more detail on different variation like scaling prior to masking SCALE(QK⊤) ⊙ M are provided at Appendix C.1). Motivated by (9) and the observation ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'is to derive a bidirectional ', 'modified_lines': 'recurrence for attention with scaling and ', 'original_lines': 'recurrence for attention with scaling and ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 LION: EXPANDING FULL ATTENTION TO BIDIRECTIONAL RNN', 'after_section': None, 'context_after': 'yF i i + cB cF ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '1 2 ', 'modified_lines': 'i + yB ', 'original_lines': 'i + yB ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. As shown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality, where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connection ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i 4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'after_section': None, 'context_after': ' 0 ', 'paragraph_idx': 49, 'before_section': None, 'context_before': 'k=i ak k=i+1 ak ', 'modified_lines': 'Dij = ', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'after_section': None, 'context_after': '5 EXPERIMENTS This section illustrates the performance of LION-LIT and -S on well-established benchmarks: Long Range Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3, LION-S benefits from the parallelization capabilities built for masked attention during training. 
We ', 'paragraph_idx': 49, 'before_section': None, 'context_before': '50.41 86.07 ', 'modified_lines': 'C.7. Adding the selective parameter ai into the LION framework introduces LION-S a bidirectional selective transformer with recurrence inference. Importantly, due to the use of the recurrence parameter ai, LION-S does not require any additional positional encoding, enabling it to extrapolate beyond the context length or resolution during inference. In addition to its connection to continuous systems, the matrix D can be computed using a prefix sum algorithm (Blelloch, 1990), allowing for the summation of ai values in O(log(L)) time, after which it can be exponentiated to derive the mask M. Note that, as the same parameter eai − 1 has appeared in (27a) and (27b), we can consider this term as a part of ki. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'for stability based on our results in Appendix D.6. For additional experimental details and results with smaller scaled models, we refer to Appendix D.5 and Appendix D.4 respectively. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'their downstream performance on the GLUE benchmark (Wang et al., 2018). Both the pre-training and fine-tuning phases employ the M2 hyperparameters (Fu et al., 2023), except for the LARGE models, where learning rates of 2 · 10−4 and 10−5 for pretraining and finetuning were employed ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Table 3: C4 Masked Language Modelling and GLUE results. For each column (dataset), the best and the second best results for each model size are highlighted with bold and underline respectively. Model MLM Acc. MNLI RTE QQP QNLI SST2 STSB MRPC COLA Avg. BERTLARGE LION-LIT LARGE LION-RETNET LION-S LARGE 69.88 67.11 68.64 69.16 85.68 83.73 83.82 84.38 67.44 57.18 60.72 57.69 89.90 89.85 89.72 89.57 91.89 89.93 89.79 90.30 93.04 91.86 92.93 92.93 88.63 88.02 87.29 87.68 90.89 90.18 89.66 90.57 56.14 55.36 56.83 59.54 82.95 80.76 81.34 81.58 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 4: Image classification task results. We present the Top-1 accuracy on the validation data. LION-S shows competitive performance against ViT models. * indicates that results are directly copied from paper Zhu et al. (2024b), where the authors are training under a different setup (e.g., ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the softmax, added the nonlinearity ϕ(x) = elu(x) + 1, and scaling. For Hydra we consider the original hyperparameters and we modify the number of layers to match the parameter size of ViT-T. ', 'modified_lines': '', 'original_lines': 'Table 4 presents the Top-1 accuracy of models on each dataset. While LION-LIT and LION-S have the same complexity, LION-S architecture significantly outperforms the LION-LIT model in each task. ViT-T, which has a quadratic complexity performs slightly better than LION-S on ImageNet and worse in other datasets. When considering a larger ViT-Small (21.7M) parameters, this gap between trans- former and LION-S on ImageNet gets smaller. 
LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '592 593 ', 'modified_lines': '', 'original_lines': 'REFERENCES Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Benedikt Alkin, Maximilian Beck, Korbinian P¨oppel, Sepp Hochreiter, and Johannes Brandstetter. Vision-LSTM: xLSTM as generic vision backbone. arXiv preprint arXiv:2406.04303, 2024. Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, Dylan Zinsley, James Zou, Atri Rudra, and Christopher R´e. Simple linear attention language models balance the recall-throughput tradeoff. arXiv preprint arXiv:2402.18668, 2024. Jimmy Lei Ba. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Maximilian Beck, Korbinian P¨oppel, Markus Spanring, Andreas Auer, Oleksandra Prudnikova, Michael Kopp, G¨unter Klambauer, Johannes Brandstetter, and Sepp Hochreiter. xlstm: Extended long short-term memory. arXiv preprint arXiv:2405.04517, 2024. Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv:2004.05150, 2020. Guy E Blelloch. Prefix sums and their applications. 1990. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. 2020. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. CoRR, abs/1904.10509, 2019. Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Ben- jamin Belanger, Lucy J Colwell, and Adrian Weller. Rethinking attention with performers. In International Conference on Learning Representations, 2021. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988, 2019. Tri Dao and Albert Gu. Transformers are ssms: Generalized models and efficient algorithms through structured state space duality. In Forty-first International Conference on Machine Learning, 2024. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019. Jesse Dodge, Maarten Sap, Ana Marasovi´c, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021. 
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. Jeffrey L Elman. Finding structure in time. Cognitive science, 14(2):179–211, 1990. Daniel Y Fu, Simran Arora, Jessica Grogan, Isys Johnson, Sabri Eyuboglu, Armin W Thomas, Benjamin Spector, Michael Poli, Atri Rudra, and Christopher R´e. Monarch mixer: A simple sub-quadratic gemm-based architecture. In Advances in Neural Information Processing Systems, 2023. 11 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': '810 811 ', 'paragraph_idx': 59, 'before_section': None, 'context_before': '2https://github.com/karpathy/nanoGPT ', 'modified_lines': '16 ', 'original_lines': '15 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 01000200030004000SequenceLength101520253035PerplexityGPT-2SequenceLength18.25Avg.LION-S(1D)GPT-2LinAtt Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '19 972 973 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i and yF ', 'modified_lines': '', 'original_lines': '18 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 Under review as a conference paper at ICLR 2025 Figure 6: Memory allocation in LION during Forward and Backward recurrences. The efficient way of re-using the memory during inference is explained. B.6 ZERO-ORDER HOLD DISCRETIZATION Below we explain the zero-order hold discretization derived by Kalman (1960). An LTI system can be represented with the equation: ˙h(t) = Ah(t) + Bx(t), which can be rearranged to isolate h(t): . By multiplying the equation by e−At, we get ˙h(t) − Ah(t) = Bx(t). e−At ˙h(t) − e−AtAh(t) = e−AtBx(t) Since ∂ ∂t eAt = AeAt = eAtA, Eq. (35) can be written as: After integrating both sides and simplifications, we get ∂ ∂t (cid:0)e−Ath(t)(cid:1) = e−AtBx(t). e−Ath(t) = (cid:90) t 0 e−Aτ Bx(τ ) dτ + h(0). (33) (34) (35) (36) (37) By multiplying both sides by eAt to isolate h(t) and performing further simplifications, at the end we get h(t) = eAt e−Aτ Bx(τ ) dτ + eAth(0). (38) To discretize this solution, we can assume sampling the system at even intervals, i.e. each sample is at kT for some time step T , and that the input x(t) is constant between samples. To simplify the notation, we can define hk in terms of h(kT ) such that 0 Using the new notation, Eq. (38) becomes hk = h(kT ). hk = eAkT h(0) + eAkT (cid:90) kT 0 e−Aτ Bx(τ ) dτ. Now we want to express the system in the form: hk+1 = ˜Ahk + ˜Bxk. To start, let’s write out the equation for xk+1 as hk+1 = eA(k+1)T h(0) + eA(k+1)T (cid:90) (k+1)T 0 e−Aτ Bx(τ ) dτ. 
(39) (40) (41) (42) (cid:90) t ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'L (cid:88) ', 'paragraph_idx': 59, 'before_section': None, 'context_before': 'j=1 ', 'modified_lines': '24 ', 'original_lines': '23 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1242 1243 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(86) (87) ', 'modified_lines': '', 'original_lines': ' 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 Under review as a conference paper at ICLR 2025 Mij = Πi+1 k=jλk k=i+1λk Πj 1 i > j i < j i = j = 1 λ1 λ1λ2 ... λ2 1 λ2 ... λ2λ3 · · · λ2 · · · λL λ3 1 ... · · · λ3 · · · λL · · · λ4 · · · λL . . . ... λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 · · · 1 The above mask is equal to MF + MB − I, allowing equation (86) to be rewritten as: i + yB yF i = q⊤ i ( i (cid:88) j=1 L (cid:88) = q⊤ i ( j=1 L (cid:88) = q⊤ i ( j=1 MF ijkjv⊤ j + L (cid:88) j=i MB ijkjv⊤ j ) − qi ⊤kivi Mijkjv⊤ j ) + qi ⊤kivi − qi ⊤kivi Mijkjv⊤ j ) So we can finally find the output of each layer yi as: yi = yF i + yB i cF i + cB i Equation (91) −−−−−−−→ yi = i ((cid:80)L q⊤ j=1 Mijkjv⊤ j ) cF i + cB i It can easily be shown that: i (cid:88) i = q⊤ cF i ( j=1 kj) − 1 2 q⊤ i ki , cB i = q⊤ i ( L (cid:88) j=i kj) − 1 2 q⊤ i ki ⇒ cF i + cB i = q⊤ i ( L (cid:88) j=1 ⇒ cF i + cB i = q⊤ i ( L (cid:88) kj) + q⊤ i ki − 1 2 q⊤ i ki − 1 2 q⊤ i ki kj) + q⊤ i ki − q⊤ i ki = q⊤ i ( L (cid:88) kj) = q⊤ i zL j=1 j=1 So the final output of the layer is: yi = i + yB yF i i + cB cF i = i ((cid:80)L j=1 Mijkjv⊤ q⊤ j ) i ((cid:80)L q⊤ j=1 kj) Alternatively, in vectorized form, it can be expressed as: Y = YF + YB = (cid:0)SCALE(QK⊤) ⊙ M(cid:1)V with M being the attention mask created by λis as in equation 88. 24 (88) (89) (90) (91) (92) (93) (94) (95) (96) (97) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'after_section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. 
By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'context_after': 'PathX Avg. 16K ', 'paragraph_idx': 49, 'before_section': None, 'context_before': '3https://github.com/lindermanlab/S5 ', 'modified_lines': '30 ', 'original_lines': '29 ', 'after_paragraph_idx': 49, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1566 1567 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '97.99 ', 'modified_lines': '', 'original_lines': '1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-24 15:49:54
ICLR.cc/2025/Conference
IzxBJJxLno
jpCkNrtU48
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In this paper, we answer this question affirmatively. Indeed, we demonstrate that applying two linear attention mechanisms simply in opposite directions and then summing them does not recover the original bidirectional Transformer (cf., Observation 3.1). Instead, we propose a novel design, LION, that allows the bidirectional Transformer to be expressed as a bidirectional RNN. Our framework retains the advantages of parallel training found in Transformers, offering bidirectionality in inference 1 ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'efficiency of RNNs, a natural question arises: Is a bidirectional Transformer actually a bidirectional RNN? ', 'modified_lines': 'while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. ', 'original_lines': ' ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 1: Summary of training and inference strategies. ⇄ represents bidirectionality of the method. Complexity indicates the computational and memory requirements during inference for processing ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. Besides the popularity of Transformers and their variants, state space models (SSMs) have emerged as another family of architecture for sequence modeling due to their efficient inference capabilities (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). The representative works Mamba (Gu & Dao, 2024) and Mamba-2 (Dao & Gu, 2024) have also demonstrated strong performance in language modeling. Building on our bidirectional Transformer theory, LION framework combines the expressive power of bidirectional Transformers with the selective mechanism of Mamba, further enhancing the model’s capability to process long sequences while maintaining computational efficiency. Through this approach, we aim to provide a scalable and efficient solution for tasks that demand both long-range dependency modeling and dense information processing. Overall, our main contributions can be summarized as follows: • We propose a theoretical framework LION (Theorem 3.3), which expresses bidirectional Transformers as bidirectional RNNs, enabling efficient inference for long sequences while benefiting from well-established Transformer training (cf., Table 1). • Our theoretical framework offers the foundations to transform a wide class of autoregressive recurrent models (cf., Appendix B) into their bidirectional counterparts. • We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). 
• Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent network. Section 3 then explains our framework LION, and mathematically grounds our concrete contributions. Section 4 describes how to build LION-S by introducing selectivity via discretization of continuous state-space models, which is then followed by numerical evidence in Section 5 and the conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND Notation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. Attention. Attention have been a cornerstone of foundation models for several years (Vaswani et al., 2017; Kojima et al., 2022). Given a data sequence x1, x2, . . . , xL, a single-head softmax-attention uses a softmax function to define the attention weights: (qi, ki, vi) = (Wqxi, Wkxi, Wvxi) , yi = i (cid:88) j=1 exp(q⊤ i kj) p=1 exp(q⊤ i kp) (cid:80)i vj, (1) where qi, ki, vi, yi ∈ Rd and the weights Wq, Wk, Wv ∈ Rd×d with d being the projection dimension. With Q := [q1, . . . , qL]⊤, K := [k1, . . . , kL]⊤, V := [v1, . . . , vL]⊤ ∈ RL×d, we can then express the attention as the following matrix form: Y = softmax (cid:0)QK⊤(cid:1) V. Such matrix form is crucial for parallelized training over the sequence length. In contrast, (1) is used during inference for generating or processing tokens. However, for autoregressive transformers (Kojima et al., 2022), employing (1) requires storing the previous L tokens to attend to the latest token during inference. This approach is less efficient than RNNs, where only the state is stored regardless of the previous sequence (cf., Orvieto et al. (2023)). Attention can be generalized via a kernel function κ : Rd × Rd → R (Tsai et al., 2019) as 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 Figure 1: (Left) standard Transformer block. (Middle) training mode of LION with the bidirectional Transformer. (Right) inference mode of LION with the bidirectional RNN. Norm refers to Layer normalization, Proj is the projection operation to calculate Q, K, V and λ values, Scale is the scaling operation in Eq. (4), Inv is the inversion operation, A is the linear attention matrix, A = QKT , MF/B are forward/backward recurrence masks, yF/B are forward/backward outputs and cF/B are forward/backward are the scaling coefficients. For further definitions of the architectural elements in LION, please refer to Sections 2 and 3. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Inference Memory ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'O(Ld2) O(Ld2) O(Ld2) ', 'modified_lines': '', 'original_lines': ' yi = i (cid:88) j=1 κ(qi, kj) p=1 κ(qi, kp) (cid:80)i vj . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '(2) Katharopoulos et al. 
(2020) introduces Linear Attention which replaces the exponential kernel ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '✓ O(Ld) ', 'modified_lines': 'Besides the popularity of Transformers and their variants, state space models (SSMs) have emerged as another family of architecture for sequence modeling due to their efficient inference capabilities (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). The representative works Mamba (Gu & Dao, 2024) and Mamba-2 (Dao & Gu, 2024) have also demonstrated strong performance in language modeling. Building on our bidirectional Transformer theory, LION framework combines the expressive power of bidirectional Transformers with the selective mechanism of Mamba, further enhancing the model’s capability to process long sequences while maintaining computational efficiency. Through this approach, we aim to provide a scalable and efficient solution for tasks that demand both long-range dependency modeling and dense information processing. Overall, our main contributions can be summarized as follows: • We propose a theoretical framework LION (Theorem 3.3), which expresses bidirectional Transformers as bidirectional RNNs, enabling efficient inference for long sequences while benefiting from well-established Transformer training (cf., Table 1). • Our theoretical framework offers the foundations to transform a wide class of autoregressive recurrent models (cf., Appendix B) into their bidirectional counterparts. • We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). • Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent network. Section 3 then explains our framework LION, and mathematically grounds our concrete contributions. Section 4 describes how to build LION-S by introducing selectivity via discretization of continuous state-space models, which is then followed by numerical evidence in Section 5 and the conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND Notation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 Figure 1: (Left) standard Transformer block. (Middle) training mode of LION with the bidirectional Transformer. (Right) inference mode of LION with the bidirectional RNN. Norm refers to Layer normalization, Proj is the projection operation to calculate Q, K, V and λ values, Scale is the scaling operation in Eq. 
(4), Inv is the inversion operation, A is the linear attention matrix, A = QKT , MF/B are forward/backward recurrence masks, yF/B are forward/backward outputs and cF/B are forward/backward are the scaling coefficients. For further definitions of the architectural elements in LION, please refer to Sections 2 and 3. Attention. Attention have been a cornerstone of foundation models for several years (Vaswani et al., 2017; Kojima et al., 2022). Given a data sequence x1, x2, . . . , xL, a single-head softmax-attention uses a softmax function to define the attention weights: (qi, ki, vi) = (Wqxi, Wkxi, Wvxi) , yi = i (cid:88) j=1 exp(q⊤ i kj) p=1 exp(q⊤ i kp) (cid:80)i vj, (1) where qi, ki, vi, yi ∈ Rd and the weights Wq, Wk, Wv ∈ Rd×d with d being the projection dimension. With Q := [q1, . . . , qL]⊤, K := [k1, . . . , kL]⊤, V := [v1, . . . , vL]⊤ ∈ RL×d, we can then express the attention as the following matrix form: Y = softmax (cid:0)QK⊤(cid:1) V. Such matrix form is crucial for parallelized training over the sequence length. In contrast, (1) is used during inference for generating or processing tokens. However, for autoregressive transformers (Kojima et al., 2022), employing (1) requires storing the previous L tokens to attend to the latest token during inference. This approach is less efficient than RNNs, where only the state is stored regardless of the previous sequence (cf., Orvieto et al. (2023)). Attention can be generalized via a kernel function κ : Rd × Rd → R (Tsai et al., 2019) as yi = i (cid:88) j=1 κ(qi, kj) p=1 κ(qi, kp) (cid:80)i vj . ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '3 ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'κ(qi, kj) = exp(q⊤ a higher-dimensional space. For simplicity of notation, we use qi := ϕ(Wqxi) and similarly for ki := ϕ(Wkxi) in the sequel. This approach enables the transformer to be framed as an RNN with ', 'modified_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. These models project tokens into a state space representation, and 1However, softmax based attention due to applying non-linearity into the attention formulation can not be linearized in this form ', 'original_lines': '', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'learn the discretized parameters ( ¯A, ¯B, and ¯C) of the continuous SSM (A(t), B(t), and C(t)) (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). Recent SSMs designed for language modeling, such as Mamba (Gu & Dao, 2024), use input-dependent matrices ¯Ai, ¯Bi, and ¯Ci, showing strong ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. 
These models project tokens into a state space representation, and ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': '(5a) (5b) ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '⊤Si ⊤zi NON-SCALED : yi = qi ', 'modified_lines': ' SCALED : yi = ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'qi qi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and ⋆; please refer to Table 5 for detailed choices for different architectures. ', 'modified_lines': '', 'original_lines': 'SCALED : yi = ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Observation 3.1. First, we observe that the combination of the forward and backward recurrences of the linear recurrent model cannot yield the attention. Consider the following bidirectional recurrence equations: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'setting with equivalence to the scaled attention. We introduce LION, a bidirectional sequence-to- sequence framework equivalent to attention that benefits from attention parallelization during training and achieves fast linear recurrence during inference. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 LION: EXPANDING FULL ATTENTION TO BIDIRECTIONAL RNN', 'after_section': None, 'context_after': 'i + yB 4 ', 'paragraph_idx': 20, 'before_section': '3 LION: EXPANDING FULL ATTENTION TO BIDIRECTIONAL RNN', 'context_before': 'recurrence, the subscript of S, z, y, q, k, v should be flipped by the rule of i := L − i + 1. The final output is the addition of the forward and the backward recurrences, i.e., yi = yF i , ∀i ∈ [L]. ', 'modified_lines': 'While a1 and a2 in Eq. (6) represents the attention without the mask, b1 and b2 in Eq. (7) corresponds to masked attention. Moreover, λi corresponds to the scalar version of Λi in Eq. (5a). We show in Figure 2 that this recurrence does not equal the attention matrix, regardless of whether scaling is applied before (6) or after (7) the mask, as the naive addition of two linear recurrent models for forward and backward recurrences fails to produce an attention matrix (more details of proofs are at Appendix C.1). These key differences can be described as follows: (i) The diagonal elements representing attention for each token appear in both recurrences, leading to twice the attention score for a token and itself compared to others. (ii) Causal (forward recurrence) and non-causal (backward ', 'original_lines': '1However, softmax based attention due to applying non-linearity into the attention formulation can not be linearized in this form ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': 'Abstract', 'after_section': None, 'context_after': 'recurrence) attention scores are scaled individually, resulting in tokens not being properly scaled relative to the keys of other tokens in the sequence, unlike attention shown in Figure 2, parts a2 and b2. We precede our main result with a proposition from Sun et al. 
(2023), which states that an autoregres- sive transformer can be expressed as a linear recurrent model: Proposition 3.2. Considering the following forward recurrence: i = λiSF ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'for the causal (forward recurrence), ', 'modified_lines': '', 'original_lines': 'While a1 and a2 in Eq. (6) represents the attention without the mask, b1 and b2 in Eq. (7) corresponds to masked attention. Moreover, λi corresponds to the scalar version of Λi in Eq. (5a). We show in Figure 2 that this recurrence does not equal the attention matrix, regardless of whether scaling is applied before (6) or after (7) the mask, as the naive addition of two linear recurrent models for forward and backward recurrences fails to produce an attention matrix (more details of proofs are at Appendix C.1). These key differences can be described as follows: (i) The diagonal elements representing attention for each token appear in both recurrences, leading to twice the attention score for a token and itself compared to others. (ii) Causal (forward recurrence) and non-causal (backward ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ', 'after_section': '1 ', 'context_after': 'where we use for the diagonal elements of the attention matrix and the mask. By splitting (10) into upper and lower triangular forms, we obtain the following: for lower triangular elements, and 5 216 217 ', 'paragraph_idx': 30, 'before_section': '1 ', 'context_before': '(10) ', 'modified_lines': 'for upper triangular elements, 2 q⊤ 1 k1 1 k1 q⊤ q⊤ · · · q⊤ 2 q⊤ 1 kL 1 kL 1 k2 1 k1 1 k2 q⊤ q⊤ · · · 1 1 (cid:124) λ2 1 λ2 ... 1 λ1 λ1λ2 ... (cid:124) 2 k1 q⊤ q⊤ 2 k2 ... ... · · · q⊤ 2 kL ... . . . L k1 q⊤ q⊤ · · · q⊤ L kL L k2 (cid:123)(cid:122) A = QK⊤ λ2λ3 · · · λ2 · · · λL = (cid:125) q⊤ 2 k1 ... q⊤ L k1 λ3 1 ... · · · λ3 · · · λL · · · λ4 · · · λL . . . ... + (cid:125) (cid:124) 1 2 q⊤ 2 k2 ... q⊤ L k2 (cid:123)(cid:122) AF . . . · · · 1 1 λ1 1 2 q⊤ L kL λ1λ2 ... λ2 ... λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 1 ... (cid:123)(cid:122) MF (cid:124) (cid:125) = (cid:124) λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 · · · 1 (cid:123)(cid:122) M 1 2 q⊤ 2 k2 · · · . . . q⊤ 2 kL ... 1 2 q⊤ L kL + (cid:125) (cid:124) . . . · · · 1 (cid:123)(cid:122) AB 1 λ2 λ2λ3 1 λ3 1 (cid:125) · · · λ2 · · · λL · · · λ3 · · · λL · · · λ4 · · · λL . . . ... 1 (cid:123)(cid:122) MB (cid:125) (11) −I (12) ', 'original_lines': 'for upper triangular elements, ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': 'Abstract', 'after_section': None, 'context_after': 'As in (11) and (12), the attention matrix and mask are split into lower (AF , MF ) and upper triangular (AB, MB) matrices. The scaling operator divides each row of the attention matrix to its summed value, and hence equals to a diagonal matrix C−1 multiplied by the attention: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' (cid:124) 1 k1 q⊤ q⊤ 1 k2 · · · q⊤ 1 kL 2 k1 q⊤ q⊤ 2 k2 ... ... · · · q⊤ 2 kL ... . . . L k1 q⊤ q⊤ · · · q⊤ L kL L k2 (cid:123)(cid:122) A = QK⊤ λ2λ3 · · · λ2 · · · λL = 1 2 q⊤ 1 k1 q⊤ 2 k1 ... q⊤ L k1 (cid:125) 1 2 q⊤ 2 k2 ... q⊤ L k2 (cid:123)(cid:122) AF . . . · · · 1 1 λ1 λ3 1 ... · · · λ3 · · · λL · · · λ4 · · · λL . . . ... (cid:124) (cid:125) = (cid:124) + (cid:125) (cid:124) 1 2 q⊤ L kL 1 2 q⊤ 1 k1 q⊤ 1 k2 1 2 q⊤ 2 k2 · · · · · · . . . q⊤ 1 kL q⊤ 2 kL ... 
1 2 q⊤ L kL (cid:125) (cid:123)(cid:122) AB 1 λ2 λ2λ3 1 λ3 1 + (cid:125) (cid:124) . . . · · · 1 · · · λ2 · · · λL · · · λ3 · · · λL · · · λ4 · · · λL . . . ... 1 (cid:123)(cid:122) MB (11) −I (12) (cid:125) λ1λ2 ... λ2 ... λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 1 ... (cid:123)(cid:122) MF 1 λ1 λ1λ2 ... λ2 1 λ2 ... (cid:124) λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 · · · 1 (cid:123)(cid:122) M ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '12 q⊤q⊤', 'after_section': None, 'context_after': '6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 SF/B i ', 'paragraph_idx': 43, 'before_section': '12 q⊤q⊤', 'context_before': 'The equations above are the exact representations for the forward pass, as shown in (16), but with the tokens in reverse order. The matrices AB and MB are also modified to match the final flipped output using flipped input values V using functions F (X) = JLXJL and FLIP(X) = JLX, where JL is ', 'modified_lines': 'an L-dimensional exchange matrix, as detailed in Appendix C.4. Thus, the outputs of the forward and backward recurrences can be expressed as follows: Y = (CF + CB)−1( YF + YB ), where (18) YF = (AF ⊙ MF )V, YB = (AB ⊙ MB)V = FLIP (cid:16)(cid:0)F (AB) ⊙ F (MB)(cid:1)FLIP(V) (cid:17) . (19) Theorem 3.3. (LION) Since (18) is the vectorized form of the recurrence presented in (3.2), we can therefore express the equivalent recurrence for the scaled attention as follows: 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'original_lines': ' an L-dimensional exchange matrix, as detailed in Appendix C.4. Thus, the outputs of the forward and backward recurrences can be expressed as follows: Y = (CF + CB)−1( YF + YB ), where (18) YF = (AF ⊙ MF )V, YB = (AB ⊙ MB)V = FLIP (cid:16)(cid:0)F (AB) ⊙ F (MB)(cid:1)FLIP(V) (cid:17) . (19) Theorem 3.3. (LION) Since (18) is the vectorized form of the recurrence presented in (3.2), we can therefore express the equivalent recurrence for the scaled attention as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 43}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. As shown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality, where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connection ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i 4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS', 'after_section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS', 'context_after': 'where exp(·) is applied element-wise and M can be learned with trainable ai or with selectivity as ai = log(σ(w⊤ a xi + b)), where σ is the sigmoid function. 
The parameter ai can also be treated as a vector, allowing it to be multiplied with the Hadamard product on the state Si, as discussed in 7 Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 47, 'before_section': None, 'context_before': 'if i > j if i < j if i = j ', 'modified_lines': ' , M = exp(D), (28) C.7. Adding the selective parameter ai into the LION framework introduces LION-S a bidirectional selective transformer with recurrence inference. Importantly, due to the use of the recurrence parameter ai, LION-S does not require any additional positional encoding, enabling it to extrapolate beyond the context length or resolution during inference. In addition to its connection to continuous systems, the matrix D can be computed using a prefix sum algorithm (Blelloch, 1990), allowing for the summation of ai values in O(log(L)) time, after which it can be exponentiated to derive the mask M. Note that, as the same parameter eai − 1 has appeared in (27a) and (27b), we can consider this term as a part of ki. ', 'original_lines': ' 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': 47, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5 EXPERIMENTS This section illustrates the performance of LION-LIT and -S on well-established benchmarks: Long Range Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3, LION-S benefits from the parallelization capabilities built for masked attention during training. We ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '50.41 86.07 ', 'modified_lines': '', 'original_lines': 'C.7. Adding the selective parameter ai into the LION framework introduces LION-S a bidirectional selective transformer with recurrence inference. Importantly, due to the use of the recurrence parameter ai, LION-S does not require any additional positional encoding, enabling it to extrapolate beyond the context length or resolution during inference. In addition to its connection to continuous systems, the matrix D can be computed using a prefix sum algorithm (Blelloch, 1990), allowing for the summation of ai values in O(log(L)) time, after which it can be exponentiated to derive the mask M. Note that, as the same parameter eai − 1 has appeared in (27a) and (27b), we can consider this term as a part of ki. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.2 MASKED LANGUAGE MODELLING We assess BERT, LION-S, and a Linear Attention variant of BERT combined with our bidirectional ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'discussion on the choice of non-linearity, scaling, and dimensions of parameters is presented in Appendix D.2 and D.3. For more information on the LRA benchmarks, see Appendix D.1. ', 'modified_lines': '', 'original_lines': '8 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Table 3: C4 Masked Language Modelling and GLUE results. For each column (dataset), the best and the second best results for each model size are highlighted with bold and underline respectively. 
Model MLM Acc. MNLI RTE QQP QNLI SST2 STSB MRPC COLA Avg. BERTLARGE LION-LIT LARGE LION-RETNET LION-S LARGE 69.88 67.11 68.64 69.16 85.68 83.73 83.82 84.38 67.44 57.18 60.72 57.69 89.90 89.85 89.72 89.57 91.89 89.93 89.79 90.30 93.04 91.86 92.93 92.93 88.63 88.02 87.29 87.68 90.89 90.18 89.66 90.57 56.14 55.36 56.83 59.54 82.95 80.76 81.34 81.58 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'context_after': 'Table 4: Image classification task results. We present the Top-1 accuracy on the validation data. LION-S shows competitive performance against ViT models. * indicates that results are directly copied from paper Zhu et al. (2024b), where the authors are training under a different setup (e.g., ', 'paragraph_idx': 60, 'before_section': None, 'context_before': 'the softmax, added the nonlinearity ϕ(x) = elu(x) + 1, and scaling. For Hydra we consider the original hyperparameters and we modify the number of layers to match the parameter size of ViT-T. ', 'modified_lines': 'Table 4 presents the Top-1 accuracy of models on each dataset. While LION-LIT and LION-S have the same complexity, LION-S architecture significantly outperforms the LION-LIT model in each task. ViT-T, which has a quadratic complexity performs slightly better than LION-S on ImageNet and worse in other datasets. When considering a larger ViT-Small (21.7M) parameters, this gap between trans- former and LION-S on ImageNet gets smaller. LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. ', 'original_lines': '', 'after_paragraph_idx': 61, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.4 CONTEXT EXTENSION AND MEMORY DURING INFERENCE ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'of 4 (Right). While ViT goes out of memory (OOM) for resolution 1248, LION-S only needs ∼ 6 GB which is ∼ 94.4% more efficient. Similarly, BERT goes OOM for sequence length 14, 336, while for the same sequence length, LION-S requires less than 15GB of GPU memory. ', 'modified_lines': '', 'original_lines': ' Table 4 presents the Top-1 accuracy of models on each dataset. 
While LION-LIT and LION-S have the same complexity, LION-S architecture significantly outperforms the LION-LIT model in each task. ViT-T, which has a quadratic complexity performs slightly better than LION-S on ImageNet and worse in other datasets. When considering a larger ViT-Small (21.7M) parameters, this gap between trans- former and LION-S on ImageNet gets smaller. LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '754 755 ', 'modified_lines': '', 'original_lines': 'Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, and Tie-Yan Liu. Stable, fast and accurate: Kernelized attention with relative positional encoding. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettle- moyer. Luna: Linear unified nested attention. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. Mega: moving average equipped gated attention. arXiv preprint arXiv:2209.10655, 2022. Antonio Orvieto, Samuel L Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. Resurrecting recurrent neural networks for long sequences. In International Conference on Machine Learning, pp. 26670–26698. PMLR, 2023. Bo Peng, Daniel Goldstein, Quentin Anthony, Alon Albalak, Eric Alcaide, Stella Biderman, Eugene Cheah, Teddy Ferdinan, Haowen Hou, Przemysław Kazienko, et al. Eagle and finch: Rwkv with matrix-valued states and dynamic recurrence. arXiv preprint arXiv:2404.05892, 2024. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong. Random feature attention. arXiv preprint arXiv:2103.02143, 2021. Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference. Proceedings of Machine Learning and Systems, 5:606–624, 2023. Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. The devil in linear transformer. arXiv preprint arXiv:2210.10340, 2022. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748–8763. PMLR, 2021. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 
Imagenet large scale visual recognition challenge. 115(3):211–252, 2015. Imanol Schlag, Kazuki Irie, and J¨urgen Schmidhuber. Linear transformers are secretly fast weight programmers. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 9355–9366. PMLR, 2021. Jimmy T. H. Smith, Andrew Warrington, and Scott W. Linderman. Simplified state space layers for sequence modeling. In International Conference on Learning Representations (ICLR 2023), 2023. Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024. Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, and Furu Wei. Retentive network: A successor to transformer for large language models. arXiv preprint arXiv:2307.08621, 2023. Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da-Cheng Juan. Sparse sinkhorn attention. CoRR, abs/2002.11296, 2020a. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. arXiv preprint arXiv:2011.04006, 2020b. 14 Under review as a conference paper at ICLR 2025 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(a) Latency ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '861 862 863 ', 'modified_lines': '', 'original_lines': ' 01000200030004000SequenceLength101520253035PerplexityGPT-2SequenceLength18.25Avg.LION-S(1D)GPT-2LinAtt Under review as a conference paper at ICLR 2025 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': None, 'context_after': '02000400060008000Sequencelength0.000.250.500.751.001.25Nexttokenlatency(s)RecurrenceAttentionAttention+KVCache02000400060008000SequenceLength810121416Memory(GB)16.837.906.956.95RecurrenceAttentionAttention+KVCache Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 58, 'before_section': None, 'context_before': 'to Y = softmax (cid:0)QK⊤(cid:1) V or by using techniques like parallel scan, as utilized by many SSMs (e.g., Mamba, S5) (Blelloch, 1990). We will cover both techniques in the following sections. 
', 'modified_lines': '16 ', 'original_lines': '17 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1185 1186 1187 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 This recurrence is the same as recurrence (68) but with zL being fixed to the summation of all keys in the sequence, therefor the output yi can simply be written as: yi = i ((cid:80)i q⊤ j=1 Mijkjv⊤ j ) q⊤ i zL , Mij = k=iλk (cid:26)Πj+1 0 i ≥ j i < j (74) By replacing the zi = (cid:80)i form, it will become: j=1 kj in the denominator of equation (70) with zL. Therefore in vectorized Y = (AC ⊙ M(cid:1)V (75) With AC being: AC = q⊤ q⊤ q⊤ q⊤ q⊤ q⊤ 1 k1 1 zL 2 k1 2 zL 3 k1 3 zL ... q⊤ q⊤ q⊤ q⊤ 2 k2 2 zL 3 k2 3 zL ... q⊤ L k1 q⊤ L zL q⊤ L k2 q⊤ L zL . . . q⊤ L kL q⊤ L zL q⊤ q⊤ 3 k3 3 zL ... · · · Importantly this equation can be written as: Y = (cid:0)SCALE(QK⊤) ⊙ M(cid:1)V (76) which despite equation (66) scaling is applied over the whole sequence not for the causal part of the sequence. The matrix AC is helpful for driving the recurrent version of LION for Forward and Backward recurrences and the mask here M is equal to LION’s forward mask MF in equation (16). As shown in (16) the forward recurrence for the causal part of the attention can be presented as YB = AF ⊙ MF the matrix AF can be created simply by using matrix AC as bellow: q⊤ 1 k1 1 q⊤ 2 1 zL q⊤ 2 k1 q⊤ 2 zL q⊤ 3 k1 q⊤ 3 zL ... q⊤ L k1 q⊤ L zL (cid:124) q⊤ 2 k2 1 q⊤ 2 2 zL q⊤ 3 k2 q⊤ 3 zL ... q⊤ L k2 q⊤ L zL 1 2 (cid:123)(cid:122) AF Or equivalently: q⊤ q⊤ 3 k3 3 zL ... · · · = (cid:125) (cid:124) . . . q⊤ L kL q⊤ L zL 1 2 q⊤ q⊤ q⊤ q⊤ q⊤ q⊤ 1 k1 1 zL 2 k1 2 zL 3 k1 3 zL ... q⊤ q⊤ q⊤ q⊤ 2 k2 2 zL 3 k2 3 zL ... q⊤ L k1 q⊤ L zL q⊤ L k2 q⊤ L zL (cid:123)(cid:122) AC q⊤ q⊤ 3 k3 3 zL ... · · · 1 2 q⊤ q⊤ 1 k1 1 zL 1 2 q⊤ q⊤ 2 k2 2 zL − (cid:125) (cid:124) . . . q⊤ L kL q⊤ L zL 1 2 q⊤ q⊤ 3 k3 3 zL . . . (cid:123)(cid:122) DF YF = AF ⊙ MF = (AC − DF ) ⊙ MF (cid:125) 1 2 q⊤ q⊤ L kL L zL (77) Since the diagonal values of the mask MF are all ones and the matrix DF is diagonal, we have: YF = (AC − DF ) ⊙ MF = AC⊙MF − DF (78) As AC ⊙ MF corresponds to linear recurrence shown at (74). The vectorized form (78) can be presented as linear recurrence: yi = i ((cid:80)i q⊤ j=1 Mijkjv⊤ j ) q⊤ i zL − 1 2 q⊤ i ki q⊤ i zL , Mij = k=iλk (cid:26)Πj+1 0 i ≥ j i < j (79) 23 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '. . . ... 
', 'modified_lines': '', 'original_lines': ' λL−1 · · · λ1 λL−1 · · · λ2 λL−1 · · · λ3 · · · ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1350 1351 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'C.4 FLIPPING OPERATION IN BACKWARD RECURRENCE Here we define the operation which flip the matrices AB, MB for the reverse reccurence th goal is to find the F (.) such that: 1 2 q⊤ 1 k1 q⊤ 1 k2 1 2 q⊤ 2 k2 AB = · · · · · · . . . q⊤ 1 kL q⊤ 2 kL ... 1 2 q⊤ L kL → F (AB) = q⊤ L kL 1 q⊤ 2 L zL q⊤ L−1kL q⊤ 2 zL ... q⊤ q⊤ 1 kL 1 zL 1 2 q⊤ L−1kL−1 q⊤ 2 zL ... q⊤ 1 kL−1 q⊤ 1 zL (98) . . . · · · 1 2 q⊤ q⊤ 1 k1 1 zL MB = 1 λ2 λ2λ3 · · · λ2 · · · λL 1 λ3 1 · · · λ3 · · · λL · · · λ4 · · · λL . . . ... 1 → F (MB) = The above can be achieved by: 1 λL λLλL−1 ... 1 λL−1 ... λL · · · λ2 λL · · · λ3 λL · · · λ4 1 ... (99) . . . · · · 1 F (A) = JLAJL, , JL = . . . 1 1 1 (100) C.5 MAPPING EXISTING AUTOREGRESSIVE MODELS INTO LION As noted, other autoregressive recurrent models can also be integrated into our bidirectional frame- work, benefiting from parallelization during training and fast bidirectional inference. Here, we demonstrate how to map several well-known linear recurrent models into the bidirectional form of LION, along with their corresponding masked attention matrix and inference linear recurrence. Linear Transformer (LION-LIT). According to Katharopoulos et al. (2020) the linear transformer has a recurrence: SF i = SF i = zF zF i−1 + kiv⊤ i , i−1 + ki, SCALED : yF i = NON-SCALED : yF ⊤SF qi i ⊤zF qi i ⊤SF i = qi i (101) (102) (103) (104) As observed, this is a special case of our bidirectional recurrence defined in (24) with λi = 1, as LION resembles the scaled masked attention. In the case of the linear transformer, we require attention without scaling for the recurrence. The vectorized form for the scaled version can then be derived easily as follows: 26 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'SF/B i zF/B ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1402 1403 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 63, 'before_section': None, 'context_before': '2023), Mamba (Gu & Dao, 2024), Local Att. 
(Vaswani et al., 2017), Sparse Transformer (Child et al., 2019), Longformer (Beltagy et al., 2020), Linformer (Wang et al., 2020), Reformer (Kitaev et al., ', 'modified_lines': '28 ', 'original_lines': '29 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We have observed that bounding the keys and queries significantly enhances the model’s ability to solve tasks. This finding is consistent with the observations in Yang et al. (2024). As demonstrated in ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1617 1618 1619 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 51, 'before_section': None, 'context_before': 'hf-transformer-finetune-glue-bert-base-uncased.yaml ', 'modified_lines': '30 ', 'original_lines': '31 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Model ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1725 1726 1727 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-24 15:58:31
ICLR.cc/2025/Conference
jpCkNrtU48
BdCAhOg2ZG
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'recurrent neural networks. LION is built upon a mathematical formulation where state-of-the-art models while delivering fast training and inference efficiency. 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'We introduce LION, a novel sequence-to-sequence framework that unifies the bidi- rectionality and parallelized training of Transformers with the fast inference of full kernelized attention with a learnable mask is efficiently computed using a bidi- rectional selective recurrent model, matching the effectiveness of softmax-based at- tention with constant-time inference. Our framework naturally accounts for spatial and temporal relationships within input sequences, reducing reliance on heuristic positional embeddings and facilitating straightforward scalability in context length and resolution. Using our framework and inspired by the recent state-space mod- els, we propose three main running examples LION-LIT , LION-RETNET , and LION-S , a transformer with selective mask and recurrent inference. Numerical evaluations on tasks such as language modeling, the Long-Range Arena, and im- age classification show that LION framework achieves performance on par with ', 'original_lines': 'We introduce LION, a novel sequence-to-sequence framework that unifies the bidirectionality and parallelized training of Transformers with the fast inference of full kernelized attention with a learnable mask is efficiently computed using a bidirectional selective recurrent model, matching the effectiveness of softmax- based attention with constant-time inference. Our framework naturally accounts for spatial and temporal relationships within input sequences, reducing reliance on heuristic positional embeddings and facilitating straightforward scalability in context length and resolution. Using our framework and inspired by the recent state- space models, we propose three main running examples LION-LIT, LION-RETNET, and LION-S, a transformer with selective mask and recurrent inference. Numerical evaluations on tasks such as language modeling, the Long-Range Arena, and image classification show that LION framework achieves performance on par with ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'In this paper, we answer this question affirmatively. Indeed, we demonstrate that applying two linear attention mechanisms simply in opposite directions and then summing them does not recover the original bidirectional Transformer (cf., Observation 3.1). Instead, we propose a novel design, LION, that allows the bidirectional Transformer to be expressed as a bidirectional RNN. Our framework retains the advantages of parallel training found in Transformers, offering bidirectionality in inference 1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'efficiency of RNNs, a natural question arises: Is a bidirectional Transformer actually a bidirectional RNN? ', 'modified_lines': ' ', 'original_lines': 'while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Table 1: Summary of training and inference strategies. ⇄ represents bidirectionality of the method. 
Complexity indicates the computational and memory requirements during inference for processing ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': ' while addressing the memory issues inherent in traditional Transformer models. A schematic of the proposed framework LION is visualized in Figure 1. Besides the popularity of Transformers and their variants, state space models (SSMs) have emerged as another family of architecture for sequence modeling due to their efficient inference capabilities (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). The representative works Mamba (Gu & Dao, 2024) and Mamba-2 (Dao & Gu, 2024) have also demonstrated strong performance in language modeling. Building on our bidirectional Transformer theory, LION framework combines the expressive power of bidirectional Transformers with the selective mechanism of Mamba, further enhancing the model’s capability to process long sequences while maintaining computational efficiency. Through this approach, we aim to provide a scalable and efficient solution for tasks that demand both long-range dependency modeling and dense information processing. Overall, our main contributions can be summarized as follows: • We propose a theoretical framework LION (Theorem 3.3), which expresses bidirectional Transformers as bidirectional RNNs, enabling efficient inference for long sequences while benefiting from well-established Transformer training (cf., Table 1). • Our theoretical framework offers the foundations to transform a wide class of autoregressive recurrent models (cf., Appendix B) into their bidirectional counterparts. • We propose three main running examples of our framework, inspired by prior work, namely: 1. LION-LIT : Scaled attention without masking, a bidirectional extension of Linear Transformer Katharopoulos et al. (2020). 2. LION-RETNET : Fixed masked scaled attention with scalar and learnable state param- eter γ, an extension of RETNET Sun et al. (2023) into the bidirectional setting. 3. LION-S : Selective masked scaled attention with input-dependent mask λi, inspired by the selectivity of Mamba-2 Dao & Gu (2024). • Through extensive experiments in the Long Range Arena, Vision Tasks, and Masked Language Modeling, we have demonstrated the capabilities of the LION framework and the models built upon it, as outlined above. Due to the space constraints, a detailed overview of related work is deferred to Appendix B. Section 2 in the sequel provides the necessary preliminaries on attention, state space model, and linear recurrent network. Section 3 then explains our framework LION, and mathematically grounds our concrete contributions. Section 4 describes how to build LION-S by introducing selectivity via discretization of continuous state-space models, which is then followed by numerical evidence in Section 5 and the conclusions in Section 6. 2 PRELIMINARIES AND BACKGROUND Notation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y and y. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product. Attention. Attention have been a cornerstone of foundation models for several years (Vaswani et al., 2017; Kojima et al., 2022). Given a data sequence x1, x2, . . . 
, xL, a single-head softmax-attention uses a softmax function to define the attention weights: (qi, ki, vi) = (Wqxi, Wkxi, Wvxi) , yi = i (cid:88) j=1 exp(q⊤ i kj) p=1 exp(q⊤ i kp) (cid:80)i vj, (1) where qi, ki, vi, yi ∈ Rd and the weights Wq, Wk, Wv ∈ Rd×d with d being the projection dimension. With Q := [q1, . . . , qL]⊤, K := [k1, . . . , kL]⊤, V := [v1, . . . , vL]⊤ ∈ RL×d, we can then express the attention as the following matrix form: Y = softmax (cid:0)QK⊤(cid:1) V. Such matrix form is crucial for parallelized training over the sequence length. In contrast, (1) is used during inference for generating or processing tokens. However, for autoregressive transformers (Kojima et al., 2022), employing (1) requires storing the previous L tokens to attend to the latest token during inference. This approach is less efficient than RNNs, where only the state is stored regardless of the previous sequence (cf., Orvieto et al. (2023)). Attention can be generalized via a kernel function κ : Rd × Rd → R (Tsai et al., 2019) as 2 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 Figure 1: (Left) standard Transformer block. (Middle) training mode of LION with the bidirectional Transformer. (Right) inference mode of LION with the bidirectional RNN. Norm refers to Layer normalization, Proj is the projection operation to calculate Q, K, V and λ values, Scale is the scaling operation in Eq. (4), Inv is the inversion operation, A is the linear attention matrix, A = QKT , MF/B are forward/backward recurrence masks, yF/B are forward/backward outputs and cF/B are forward/backward are the scaling coefficients. For further definitions of the architectural elements in LION, please refer to Sections 2 and 3. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': 'O(L) O(L) ', 'paragraph_idx': 9, 'before_section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'context_before': 'Parallel Scan Attention Attention ', 'modified_lines': 'Attention ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 9}, {'section': 'Abstract', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'κ(qi, kj) = exp(q⊤ a higher-dimensional space. For simplicity of notation, we use qi := ϕ(Wqxi) and similarly for ki := ϕ(Wkxi) in the sequel. This approach enables the transformer to be framed as an RNN with ', 'modified_lines': '', 'original_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. 
These models project tokens into a state space representation, and 1However, softmax based attention due to applying non-linearity into the attention formulation can not be linearized in this form ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'learn the discretized parameters ( ¯A, ¯B, and ¯C) of the continuous SSM (A(t), B(t), and C(t)) (Gu et al., 2022; Smith et al., 2023; Gu et al., 2020). Recent SSMs designed for language modeling, such as Mamba (Gu & Dao, 2024), use input-dependent matrices ¯Ai, ¯Bi, and ¯Ci, showing strong ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'linear recurrence 1, as shown in (4). This formulation eliminates the need to store previous tokens during inference, while still maintaining a parallelized form for training. State Space Models. Inspired by continuous-time systems, state space models (SSMs) have emerged as alternative sequence models. These models project tokens into a state space representation, and ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': '(5a) (5b) ', 'paragraph_idx': 14, 'before_section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'context_before': 'SCALED : yi = ', 'modified_lines': '⊤Si ', 'original_lines': 'qi qi ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Observation 3.1. First, we observe that the combination of the forward and backward recurrences of the linear recurrent model cannot yield the attention. Consider the following bidirectional recurrence equations: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'setting with equivalence to the scaled attention. We introduce LION, a bidirectional sequence-to- sequence framework equivalent to attention that benefits from attention parallelization during training and achieves fast linear recurrence during inference. ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'recurrence) attention scores are scaled individually, resulting in tokens not being properly scaled relative to the keys of other tokens in the sequence, unlike attention shown in Figure 2, parts a2 and b2. We precede our main result with a proposition from Sun et al. (2023), which states that an autoregres- sive transformer can be expressed as a linear recurrent model: Proposition 3.2. Considering the following forward recurrence: i = λiSF ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'are at Appendix C.1). These key differences can be described as follows: (i) The diagonal elements representing attention for each token appear in both recurrences, leading to twice the attention score for a token and itself compared to others. 
(ii) Causal (forward recurrence) and non-causal (backward ', 'modified_lines': ' ', 'original_lines': ' 4 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 Figure 2: Differences between attention and the addition of two linear recurrent models. a1) Addition of two linear transformers, a2) Attention with scaling, b1) Addition of two linear recurrent models, b2) Masked attention with scaling. The red text highlights the differences between attention and the summed recurrent models. We use for the non-causal (backward recurrence), and for the diagonal part of the attention. for the causal (forward recurrence), ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 PRELIMINARIES AND BACKGROUNDNotation. Matrices (vectors) are symbolized by uppercase (lowercase) boldface letters, e.g., Y andy. The Hadamard product is denoted by ⊙ and ∗ signifies the scalar product.', 'after_section': None, 'context_after': 'Our goal (SCALE(QK⊤ ⊙ M)), as this framework is more generalized and can be adapted to vari- ous linear recurrent models (more detail on different variation like scaling prior to masking SCALE(QK⊤) ⊙ M are provided at Appendix C.1). Motivated by (9) and the observation ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'is to derive a bidirectional ', 'modified_lines': 'recurrence for attention with scaling and ', 'original_lines': 'recurrence for attention with scaling and ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'SF/B i zF/B ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Theorem 3.3. (LION) Since (18) is the vectorized form of the recurrence presented in (3.2), we can therefore express the equivalent recurrence for the scaled attention as follows: ', 'modified_lines': '', 'original_lines': '6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 LION: EXPANDING FULL ATTENTION TO BIDIRECTIONAL RNN', 'after_section': None, 'context_after': 'yF i i + cB cF ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '1 2 ', 'modified_lines': 'i + yB ', 'original_lines': 'i + yB ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. As shown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality, where the parameters Ci and Bi in the SSM correspond to qi and ki. 
However, this connection ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i 4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMS ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 LION-S: SELECTIVITY INSPIRED FROM CONTINUOUS SYSTEMSThis section outlines the selectivity for the bidirectional recurrent model and proposes LION-S. Asshown in Dao & Gu (2024), transformers can be represented as SSMs through a state-space duality,where the parameters Ci and Bi in the SSM correspond to qi and ki. However, this connectionwas established in the discrete domain. In our work, we explore the transformer recurrence withscaling in the continuous domain before discretizing it, which leads to the recurrence parameterλi. By considering the transformer recurrence in the continuous domain and applying zero-orderhold discretization (Kalman, 1960), we obtain', 'after_section': None, 'context_after': ' 0 ', 'paragraph_idx': 49, 'before_section': None, 'context_before': 'k=i ak k=i+1 ak ', 'modified_lines': 'Dij = ', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 qi', 'after_section': None, 'context_after': '5 EXPERIMENTS This section illustrates the performance of LION-LIT and -S on well-established benchmarks: Long Range Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3, ', 'paragraph_idx': 46, 'before_section': None, 'context_before': '50.41 86.07 ', 'modified_lines': 'a vector, allowing it to be multiplied with the Hadamard product on the state Si, as discussed in C.7. Adding the selective parameter ai into the LION framework introduces LION-S a bidirectional selective transformer with recurrence inference. Importantly, due to the use of the recurrence parameter ai, LION-S does not require any additional positional encoding, enabling it to extrapolate beyond the context length or resolution during inference. In addition to its connection to continuous systems, the matrix D can be computed using a prefix sum algorithm (Blelloch, 1990), allowing for the summation of ai values in O(log(L)) time, after which it can be exponentiated to derive the mask M. Note that, as the same parameter eai − 1 has appeared in (27a) and (27b), we can consider this term as a part of ki. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'for stability based on our results in Appendix D.6. For additional experimental details and results with smaller scaled models, we refer to Appendix D.5 and Appendix D.4 respectively. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'their downstream performance on the GLUE benchmark (Wang et al., 2018). Both the pre-training and fine-tuning phases employ the M2 hyperparameters (Fu et al., 2023), except for the LARGE models, where learning rates of 2 · 10−4 and 10−5 for pretraining and finetuning were employed ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Table 3: C4 Masked Language Modelling and GLUE results. For each column (dataset), the best and the second best results for each model size are highlighted with bold and underline respectively. 
Model MLM Acc. MNLI RTE QQP QNLI SST2 STSB MRPC COLA Avg. BERTLARGE LION-LIT LARGE LION-RETNET LION-S LARGE 69.88 67.11 68.64 69.16 85.68 83.73 83.82 84.38 67.44 57.18 60.72 57.69 89.90 89.85 89.72 89.57 91.89 89.93 89.79 90.30 93.04 91.86 92.93 92.93 88.63 88.02 87.29 87.68 90.89 90.18 89.66 90.57 56.14 55.36 56.83 59.54 82.95 80.76 81.34 81.58 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 4: Image classification task results. We present the Top-1 accuracy on the validation data. LION-S shows competitive performance against ViT models. * indicates that results are directly ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 4 presents the Top-1 accuracy of models on each dataset. While LION-LIT and LION-S have the same complexity, LION-S architecture significantly outperforms the LION-LIT model in each task. ', 'modified_lines': '', 'original_lines': 'ViT-T, which has a quadratic complexity performs slightly better than LION-S on ImageNet and worse in other datasets. When considering a larger ViT-Small (21.7M) parameters, this gap between trans- former and LION-S on ImageNet gets smaller. LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'after_section': '5 EXPERIMENTSThis section illustrates the performance of LION-LIT and -S on well-established benchmarks: LongRange Arena, masked language modelling, and image classification. Note that thanks to Theorem 3.3,LION-S benefits from the parallelization capabilities built for masked attention during training. Wesimilarly achieve efficient inference through the bidirectional recurrence as also illustrated by Figure1. Due to the use of ai from (28), LION-S does not require positional encodings and can extrapolatebeyond context length during inference, which we will also demonstrate below.', 'context_after': '5.4 CONTEXT EXTENSION AND MEMORY DURING INFERENCE ', 'paragraph_idx': 60, 'before_section': None, 'context_before': 'of 4 (Right). While ViT goes out of memory (OOM) for resolution 1248, LION-S only needs ∼ 6 GB which is ∼ 94.4% more efficient. Similarly, BERT goes OOM for sequence length 14, 336, while for the same sequence length, LION-S requires less than 15GB of GPU memory. ', 'modified_lines': ' ViT-T, which has a quadratic complexity performs slightly better than LION-S on ImageNet and worse in other datasets. When considering a larger ViT-Small (21.7M) parameters, this gap between trans- former and LION-S on ImageNet gets smaller. LION-S (v2) significantly improves the performance of LION-S in all tested scenarios and over ViT-T on CIFAR-100. For further ablations, cf., Appendix D.7. 
', 'original_lines': '', 'after_paragraph_idx': 60, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3 · 10−6 3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '3 10−5 ', 'modified_lines': '', 'original_lines': '1 · 10−6 10 3 · 10−5 3 · 10−6 5 8 · 10−5 5 · 10−6 10 10−5 3 · 10−6 5 10−5 5 · 10−6 3 8 · 10−5 8 · 10−5 10 8 · 10−5 3 · 10−6 10 10−5 8 · 10−6 10 10−5 1 · 10−6 10 10−5 3 · 10−5 8 10−5 3 · 10−6 3 3 · 10−5 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 15: Training Time per Epoch for Different Models. Best in bold and second best is in italic form. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'D.10 DISTILLATION RESULTS OF LION-S We have also used the same recipe from DeiT distillation Touvron et al. (2021) and distilled the RegNet network into LION-S. We observed that the distillation outperforms the original ViT-Tiny on the ImageNet dataset. The results are shown in the table below: Table 14: Distillation results of LION-S. Models Top-1 Acc. LION-S VIT-Tiny LION-S (Distilled) 67.95 70.23 70.44 D.11 TRAINING TIME FOR DIFFERENT MODELS IN VISION EXPERIMENTS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 11:56:23
ICLR.cc/2025/Conference
IhK4krkGNe
bZFhv9POxt
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'over 200,000 tokens. Results highlight the need for substantial advancements in LLMs to enhance their long-context comprehension and contribute effectively to computational literary analysis. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'However, the evaluation of these models’ long-context abilities remains a challenge due to the limitations of current benchmarks. To address this gap, we introduce NovelQA, a benchmark tailored for evaluating LLMs with complex, extended ', 'modified_lines': 'narratives. NovelQA, constructed from English novels, offers a unique blend of complexity, length, and narrative coherence, making it an ideal tool for assessing deep textual understanding in LLMs. This paper details the design and construc- tion of NovelQA, focusing on its comprehensive manual annotation process and the variety of question types aimed at evaluating nuanced comprehension. Our evaluation of long-context LLMs on NovelQA reveals significant insights into their strengths and weaknesses. Notably, the models struggle with multi-hop rea- soning, detail-oriented questions, and handling extremely long inputs, averaging ', 'original_lines': 'narratives. NovelQA offers a unique blend of complexity, length, and narrative coherence, making it an ideal tool for assessing deep textual understanding in LLMs, and is constructed from English novels. This paper details the design and construction of NovelQA, focusing on its comprehensive manual annotation pro- cess and the variety of question types aimed at evaluating nuanced comprehension. Our evaluation of long-context LLMs on NovelQA reveals significant insights into their strengths and weaknesses. Notably, the models struggle with multi-hop reasoning, detail-oriented questions, and handling extremely long inputs, averaging ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'for assessing extremely long-context understanding, offering a refined and comprehensive tool for ad- vancing natural language processing capabilities. We construct NovelQA based on novels in English, which are also ideal for testing long-context modeling because they are long and complex, with plots that are closely linked from start to end. We select novels from various eras, genres, and formats to enhance diversity. The annotation process is performed by a group of expert annotators, all of whom are holding or pursuing a degree in English Literature and have a strong interest in and familiarity with the novels they annotate. Each question is paired with a ‘golden answer’ and corresponding textual evidences from the novels. And we categorize them by complexity and aspect for detailed analysis. Figure 2 presents two examples, while Table 1 details the distribution of question types. ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'To fill this gap, we introduce NovelQA, a benchmark crafted to specifically evaluate LLMs’ per- formance on texts with averaged context windows exceeding 200,000 tokens. Unlike existing benchmarks (Shaham et al., 2023; An et al., 2023; Adams et al., 2024), NovelQA addresses the need ', 'modified_lines': ' ∗ Equal Contribution † Correponding to Yue Zhang ([email protected]) and Qian Wang ([email protected]). 
1 Published as a conference paper at ICLR 2025 Figure 1: Trend of context window size of LLMs (Orange) and average token length of long-range benchmarks (Green). NovelQA is highlighted with a star. Figure 2: Illustrative examples from NovelQA: This figure showcases two sample questions. For each question, models are evaluated under two distinct settings – multichoice, where the task is to select the correct answer from four options, and Generative, where the model generates an answer. ', 'original_lines': ' 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Figure 1: Trend of context window size of LLMs (Orange) and average token length of long-range benchmarks (Green). NovelQA is highlighted with a star. Figure 2: Illustrative examples from NovelQA: This figure showcases two sample questions. For each question, models are evaluated under two distinct settings – multichoice, where the task is to select the correct answer from four options, and Generative, where the model generates an answer. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': '2024b; Yu et al., 2024). When entering the era of Large Language Models, the context window length has been much longer than ever, and the decoder-only LLMs have been the mainstream of language models and they have faced specific problems, such as the Lost-in-middle issue (Liu et al., 2023). Thus, ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'Long-Range Benchmarks Evaluating the ability of long-context Large Language Models has been a hot topic (Koˇciský et al., 2018; Tay et al., 2021; Shaham et al., 2023; Pang et al., 2022; Wang et al., ', 'original_lines': 'Long-Range Benchmarks. Evaluating the ability of long-context Large Language Models has been a hot topic (Koˇciský et al., 2018; Tay et al., 2021; Shaham et al., 2023; Pang et al., 2022; Wang et al., ', 'after_paragraph_idx': 12, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 Figure 3: Token Count Distribution in NovelQA, including Copyrighted (left) and Public Domain Only (right). The token count of both the novel and the questions are counted. The tokenization ', 'paragraph_idx': 14, 'before_section': '2 RELATED WORK', 'context_before': 'through human effort of experts. We present a detailed comparison between NovelQA and other benchmarks in Appendix Fig 5. ', 'modified_lines': 'We also discuss related Long-Context Language Modeling methods in Appendix Sec C.1. 1We have released the demonstrations and input of NovelQA, and created a leaderboard. More details can be found in https://novelqa.github.io/. And NovelQA is released under the Apache-2.0 License. For the public access, we have released all constructed data on Huggingface https://huggingface.co/ datasets/NovelQA/NovelQA and an evaluation system on Codabench https://www.codabench. org/competitions/2727/. However, we only release public domain novels. 
Therefore, we offer two types of metrics in the evaluation system and leaderboard: one for evaluating QAs within public domain novels and another for evaluating QAs across all novels. Published as a conference paper at ICLR 2025 ', 'original_lines': 'We also discuss related Long-Context Language Modeling methods in Appendix Sec C. 3 DATA 3.1 DATASET DESCRIPTION Data Formulation. Every novel (N ) in the dataset corresponds to multiple pieces of annotated data (di). Each piece of data consists of the following domains, question (Qi), answer (Ai), multichoices (ai,0, ai,1, ai,2, and ai,3), gold label (ai,gold), evidences (si,0, si,1, ...), and type (Complxi and 1We have released the demonstrations and input of NovelQA, and created a leaderboard. More details can be found in RemovedforSubmission. And NovelQA is released under the Apache-2.0 License. Under review as a conference paper at ICLR 2025 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': '3.1 DATASET DESCRIPTION', 'after_section': '3.1 DATASET DESCRIPTION', 'context_after': 'question and the aspect that the question focuses on. By complexity, the data are categorized into three complexity levels, multi-hop (35.0%), single-hop (42.8%), and detail (22.2%). The order of complexity is as follows: multi-hop > detail > single-hop. By the aspect that each question focuses have a total of 2305 questions, of which 1640 are from 65 public domain novels, while the remaining 665 are from 24 copyrighted novels. According to the classification above, the distribution of the questions in our dataset is displayed in Table 1. We also list the ability tested by each kind of question in Appendix Table 10. 3.2 DATA COLLECTION AND ANNOTATION annotation procedure consists of two phases: (1) Template-based phase: The annotators can fill novel is either annotated by only one individual, or consistent in version across annotators, despite minor variations among different editions. Each annotator contributes to a typically small number of 20-30 questions per novel. This approach avoids forcing annotators annotating questions on review procedure follows the criteria of minimizing factual errors, enhancing expression clarity, and maintaining challenges to LLMs. Besides, we ensured that all questions are based on factual descriptions and eliminated any subjective ones. Consequently, only 79.4% of the collected data are ', 'paragraph_idx': 21, 'before_section': '3.1 DATASET DESCRIPTION', 'context_before': 'Meanwhile, the evidences domain consists of either the original excerpts from the novel, or the reasoning steps written by the annotator. ', 'modified_lines': 'Book Selection The books in NovelQA contains 65 free public-domain books from the Project Gutenberg2 and 24 copyright protected books purchased from the Internet. The selection of the books follows a criteria to ensure the diversities in the eras, styles, and themes, thus we decided to included a portion of copyrighted novels. All selected books exceed 50K words (approximately 67k tokens) and are in English. The input token count of each novel is calculated by adding the book-length to the lengths of its related questions. The distribution of the token count is illustrated in Figure 3. 
Question Distribution The annotated questions can be classified by the complexity of solving the on, the data entails seven types. A detailed specification of each type is listed in Appendix C.2.2. We 2https://www.gutenberg.org/ 4 65876200K300K400K500K600K84721200K300K400K500K600KMin CountMedianMean Published as a conference paper at ICLR 2025 Procedure Overview The annotation process is performed by a group of expert annotators.The entities into 10+ templates (see Appendix Sec C.2.1) that we design to be related to multi-hop or detailed information. This phase entails half of the data, mainly contributing to the multi-hop ones. (2) Free-formed phrase: To ensure the diversity of question expression and align the questions to the natural distribution, our second half of the data is annotated without a template, namely, the annotators contribute to any difficult questions that they come up with freely. Annotator Recruitment Our annotators are predominantly English Language and Literature uni- versity students or those with a keen interest in English novels, recruited from local universities. These students are accustomed to closely reading English novels for coursework or as a hobby, and writing reports or papers on them. Before annotation, each annotator was instructed to read our annotation instruction to understand the requirements and sign to agree to participate. Annotators are allowed to select novels for annotation based on their familiarity, ensuring they had previously read and comprehensively understood the texts. Meanwhile, we make sure the selected books meet our standards of enough word count and well-developed narratives. We also ensure that each selected unfamiliar content. Time Consumption and Rewards Given the annotator’s familiarity with their chosen novels and their experience with similar questions in their academic assignments, creating questions based on their knowledge becomes a manageable task within a reasonable time cost. The annotation reward is of $1.11 to $1.39 per tuple. As an average annotator can write 5 to 6 pieces of data at full speed according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. Template Design The first annotation phase relies on a question template, which requires the annotator to fill in the entities from the novel to form valid questions. To design templates, we carried out sufficient pre-tests on GPT-4 and Claude-2.1 to analyze their possible weaknesses in long-input QA and novel knowledge memorization. Our pre-test shows that they usually fail to tackle information spanning over multiple chapters, as well as lack attention to details that have no contribution to the main theme. Meanwhile, we also refer to around fifteen books on novel and narration theories (e.g. Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. Quality Control The created data is manually double-checked by three authors of this work. The ', 'original_lines': 'Book Selection. We source public domain novels from Project Gutenberg, and purchase e-books from internet if it is necessary. 
We aim to enhance diversity by selecting novels from various eras, genres, and formats. Recognizing that some newer, popular novels are still under copyright protection, NovelQA inevitably includes some copyrighted novels, comprising a mixture of 65 novels in public domain and 24 ones in copyright-protection. 2 All selected books exceed 50K words (approximately 67k tokens) and are in English. The input token count of each novel in NovelQA is calculated by adding the book-length to the lengths of its related questions. The distribution of the token count is illustrated in Figure 3. Question Distribution. The annotated questions can be classified by the complexity of solving the on, the data entails seven types. A detailed specification of each type is listed in Appendix C. We 2We evaluate all annotated questions in this paper and conduct analysis. For the public access, we have released all constructed data. However, we only release public domain novels. Therefore, we offer two types of metrics in the evaluation system and leaderboard: one for evaluating QAs within public domain novels and another for evaluating QAs across all novels. 4 65876200K300K400K500K600K84721200K300K400K500K600KMin CountMedianMean Under review as a conference paper at ICLR 2025 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 Procedure Overview. The annotation process is performed by a group of expert annotators.The entities into 10+ templates (see Appendix Sec C) that we design to be related to multi-hop or detailed information. This phase entails half of the data, mainly contributing to the multi-hop ones. (2) Free-formed phrase: To ensure the diversity of question expression and align the questions to the natural distribution, our second half of the data is annotated without a template, namely, the annotators contribute to any difficult questions that they come up with freely. Annotator Recruitment and Instruction. Our annotators are predominantly English Language and Literature university students or those with a keen interest in English novels, recruited from local universities. These students are accustomed to closely reading English novels for coursework or as a hobby, and writing reports or papers on them. Before annotation, each annotator was instructed to read our annotation instruction to understand the requirements and sign to agree to participate. Annotators are allowed to select novels for annotation based on their familiarity, ensuring they had previously read and comprehensively understood the texts. Meanwhile, we make sure the selected books meet our standards of enough word count and well-developed narratives. We also ensure that each selected unfamiliar content. Time Consumption and Rewards. Given the annotator’s familiarity with their chosen novels and their experience with similar questions in their academic assignments, creating questions based on their knowledge becomes a manageable task within a reasonable time cost. The annotation reward is of $1.11 to $1.39 per tuple. As an average annotator can write 5 to 6 pieces of data at full speed according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. Template Design. 
The first annotation phase relies on a question template, which requires the annotator to fill in the entities from the novel to form valid questions. To design templates, we carried out sufficient pre-tests on GPT-4 and Claude-2.1 to analyze their possible weaknesses in long-input QA and novel knowledge memorization. Our pre-test shows that they usually fail to tackle information spanning over multiple chapters, as well as lack attention to details that have no contribution to the main theme. Meanwhile, we also refer to around fifteen books on novel and narration theories (e.g. Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. Quality Control. The created data is manually double-checked by three authors of this work. The ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 20}, {'section': '3.1 DATASET DESCRIPTION', 'after_section': '3.1 DATASET DESCRIPTION', 'context_after': 'question and its golden answer and randomly permute the four answers.3 And we check those 3Our pilot study compared distractors generated by both GPT-4 and Claude 2.1. Interestingly, GPT-4 ', 'paragraph_idx': 26, 'before_section': '3.1 DATASET DESCRIPTION', 'context_before': 'takes them around 2-3 hours or less to complete each novel. The IAA test shows a score of 94.6% in Cohen’s Kappa, indicating a high agreement among annotators. ', 'modified_lines': 'Distractions for Multichoice Setting We use GPT-4 to generate three distracting options for each ', 'original_lines': 'Distractions for Multichoice Setting. We use GPT-4 to generate three distracting options for each ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 25}, {'section': '4.1', 'after_section': '4.1', 'context_after': 'in the multichoice setting, and end part. The prompt structure is shown in Appendix Table 11. following Bai et al. (2023); Li et al. (2023a); An et al. (2023), to meet with the max input length, while keeping questions and other prompts complete. (gpt-4-0125-preview) for the evaluation of generative responses in our study, which is also applied in other long-range benchmark studies (An et al., 2023; Li et al., 2023a). We further conducted a human evaluation on 800 pieces of generative outputs and carried an inter-evaluator agreement ', 'paragraph_idx': 32, 'before_section': '4.1', 'context_before': 'IMPLEMENTATIONS ', 'modified_lines': 'Settings We employ two evaluation settings: a generative setting where models directly generate short answers, and a multichoice setting with four provided options. Prompts We use uniform prompts for all LLMs, with a start part, novel content, questions, choices Truncation Due to input length limitations, we truncated the novel content from the end to the front Evaluating Generative Results Following the findings in Wang et al. (2023) which highlight GPT- 4’s proficiency in assessing the accuracy of short machine-generated answers, we employ GPT-4 ', 'original_lines': 'Settings. To thoroughly test the abilities of these LLMs, we employ two evaluation settings: a generative setting where models directly generate short answers, and a multichoice setting with four provided options. Prompts. We use uniform prompts for all LLMs, with a start part, novel content, questions, choices Truncation. 
Due to input length limitations, we truncated the novel content from the end to the front Evaluating Generative Results. Following the findings in Wang et al. (2023) which highlight GPT-4’s proficiency in assessing the accuracy of short machine-generated answers, we employ GPT-4 ', 'after_paragraph_idx': 33, 'before_paragraph_idx': 32}, {'section': '4.1', 'after_section': None, 'context_after': 'a challenge due to the immense GPU memory required, for example, it takes roughly 2.5T memory to calculate one attention matrix for a 7B model with a 200K-token input, while our local device is a 4 × 80G A100. To address this, we utilize the LMDeploy (Contributors, 2023) (based on ', 'paragraph_idx': 37, 'before_section': '4.1', 'context_before': 'clear, verifiable answers with a factual basis in the text. This objectivity significantly reduces the impact of model bias in LLM-as-Judge evaluation ', 'modified_lines': 'Commercial LLMs The APIs of commercial LLMs utilized are gpt-4-0125-preview, gpt-4o-mini, Claude-2.1, Claude-3-sonnet and Claude-3.5-sonnet. Open-source LLMs Running long-context LLMs on extremely long inputs, such as 200K tokens, is ', 'original_lines': 'Commercial LLMs. The APIs of commercial LLMs utilized are gpt-4-0125-preview, Claude-2.1, Claude-3-sonnetand Claude-3.5-sonnet. Open-source LLMs. Running long-context LLMs on extremely long inputs, such as 200K tokens, is ', 'after_paragraph_idx': None, 'before_paragraph_idx': 36}, {'section': '4.1', 'after_section': '4.1', 'context_after': 'courses and projects, several books have been read by more than one student. We selected such books and have had the readers engaged in a two-round answering process on novels they had not previously annotated. The first round was in a generative setting, and the second round was in a multichoice ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'which is only compatible with several LLMs. Therefore, we choose InternLM2-Chat-7b-200K, InternLM2-Chat-20b-200K, Llama-3.1-8b, and Llama-3.1-70b for our experiments. ', 'modified_lines': 'Human Performance As most of the annotators are from the same university and share common ', 'original_lines': 'Human Performance. As most of the annotators are from the same university and share common ', 'after_paragraph_idx': 38, 'before_paragraph_idx': None}, {'section': '4.2 MAIN RESULTS', 'after_section': None, 'context_after': '7 4.3 RESULTS BY THE QUESTION TYPE An in-depth analysis of model performance across question types reveals nuanced insights into their comprehension abilities in both generative and multichoice settings, detailed in Table 3 and different formats but also illuminates the challenges in narrative comprehension, contributing to both NLP and computational literary research. ', 'paragraph_idx': 40, 'before_section': '4.2 MAIN RESULTS', 'context_before': 'answer from scratch, as opposed to selecting from provided options. We have also observed three typical errors, hallucination, overlooking, and miscounting, a detailed ', 'modified_lines': 'analysis is conducted in Appendix Sec C.4.6. Published as a conference paper at ICLR 2025 Figure 4: Analysis of Accuracy in Generative Setting by Absolute and Relative Token Positions: The two figures above illustrates the accuracy, plotted against the absolute token indexes (left) and the percentage position (right) of each question’s relevant evidences. The x-axis of the absolute token position figure (left), reflecting token indexes, is folded on the right due to the long-tails. 
Appendix C.4.3, respectively. This analysis not only highlights the models’ weaknesses across ', 'original_lines': 'analysis is conducted in Appendix Sec C.2.4. Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Figure 4: Analysis of Accuracy in Generative Setting by Absolute and Relative Token Positions: The two figures above illustrates the accuracy, plotted against the absolute token indexes (left) and the percentage position (right) of each question’s relevant evidences in the novels. The x-axis of the absolute token position figure (left), reflecting token indexes, is folded on the right due to the long-tails. Appendix C.2.4, respectively. This analysis not only highlights the models’ weaknesses across ', 'after_paragraph_idx': None, 'before_paragraph_idx': 39}, {'section': '4.4 RESULTS BY THE POSITION', 'after_section': '4.4 RESULTS BY THE POSITION', 'context_after': 'models show improved performance on questions where the necessary evidence is located before the 100K token mark. This trend highlights a challenge for LLMs in accessing and processing 8 Table 4: Model Performance Analysis Pre and Post 100K Tokens: The accuracies of questions on evidences pre-100K and post-100K are calculated separately and compared, where the ‘100K’ ', 'paragraph_idx': 46, 'before_section': '4.4 RESULTS BY THE POSITION', 'context_before': 'where the absolute position refers to the specific token index within the text, and the relative position is normalized against the total length of the novel, scaled to a 0%-100% range. ', 'modified_lines': 'Absolute Position Analysis In the generative setting, as depicted in Figure 4 (left), all evaluated information beyond this threshold, suggesting a diminished capacity to handle very long inputs. The Published as a conference paper at ICLR 2025 ', 'original_lines': 'Absolute Position Analysis. In the generative setting, as depicted in Figure 4 (left), all evaluated Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 ', 'after_paragraph_idx': 46, 'before_paragraph_idx': 46}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'of evidence position in model performance. We also present the relationship between the accuracy and the absolute position of evidence, grouping by pre-100K and post-100K, in Table 4. understand if the proportional location of evidence influences model accuracy. This analysis, shown indicates that models maintain relatively consistent performance across various relative positions, suggesting that long-context LLMs’ effectiveness is not significantly affected by the evidence’s relative position within the standardized length of novels. 
', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2.61 8.79 ', 'modified_lines': 'multichoice setting, detailed in Appendix C.4.4, follows a similar pattern, reinforcing the importance Relative Position Analysis By normalizing the evidence positions within the entire novel, we aim to in Figure 4 (right) for the generative setting and in the Appendix C.4.4 for the multichoice setting, ', 'original_lines': 'information beyond this threshold, suggesting a diminished capacity to handle very long inputs. The multichoice setting, detailed in Appendix C.2.4, follows a similar pattern, reinforcing the importance Relative Position Analysis. By normalizing the evidence positions within the entire novel, we aim to in Figure 4 (right) for the generative setting and in the Appendix C.2.4 for the multichoice setting, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.4 RESULTS BY THE POSITION', 'after_section': None, 'context_after': '4.5 EVIDENCES RECALL ', 'paragraph_idx': 51, 'before_section': None, 'context_before': 'This analysis highlights the critical role of absolute evidence positioning in determining the accuracy of LLMs in processing long texts. Challenges arise when context beyond a specific token threshold. ', 'modified_lines': 'Conversely, the relative position within a normalized length has a minimal effect. ', 'original_lines': 'Conversely, the relative position within a normalized text length has a minimal effect on model performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.5 EVIDENCES RECALL', 'after_section': None, 'context_after': '9 Table 6: Close-book Performance across four LLMs on NovelQA. Unlike the standard scenario, models rely solely on internal knowledge without access to the novels. The parentheses indicate the ', 'paragraph_idx': 55, 'before_section': '4.5 EVIDENCES RECALL', 'context_before': 'in NovelQA again, with printing the supporting evidence simultaneously. We then prompt GPT-4 with the generated evidence alongside the annotated evidence to obtain its evaluation on the quality of retrieved evidence pieces. The evaluating matrix consists of the following three dimensions: ', 'modified_lines': 'correctness refers to whether the retrieved evidence is the same as the annotated evidence or with a similar correct meaning; relevance indicates whether the evidence is consistent with the answer; Published as a conference paper at ICLR 2025 ', 'original_lines': 'Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 55}, {'section': '4.4 RESULTS BY THE POSITION', 'after_section': None, 'context_after': 'sufficiency, whether the retrieved pieces of evidence are enough to support the answer. Each dimension is scored between 0 and 100 and an average score is further obtained through calculating the algorithmic mean on these three dimensions. Prompts involved in this evaluation procedure are The results, detailed in Table 5, show higher performances of GPT-4 and Claude 2.1. Moreover, though the scoring range is from 0 to 100, the four models all perform with low scores in evidence ', 'paragraph_idx': 51, 'before_section': None, 'context_before': '14.12 (-16.81) 15.51 (-16.86) ', 'modified_lines': 'presented in Appendix C.2.3. 
', 'original_lines': 'correctness refers to whether the retrieved evidence is the same as the annotated evidence or with a similar correct meaning; relevance indicates whether the evidence is consistent with the answer; presented in Appendix C. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Edward Morgan Forster. Aspects of the Novel. Harcourt, Brace, 1927. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Guhao Feng, Yuntian Gu, Bohang Zhang, Hao-Tong Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: a theoretical perspective. ArXiv, abs/2305.15408, 2023. URL https://api.semanticscholar.org/CorpusID:258865989. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'OpenAI. GPT-4 technical report. arXiv, 2023a. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Arseny Moskvichev and Ky-Vinh Mai. Narrativexl: A large-scale dataset for long-term memory models, 2023. URL https://arxiv.org/abs/2305.13877. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yuhuai Wu, Markus N Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. arXiv preprint arXiv:2203.08913, 2022b. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'A memory-augmented transformer for sequence modeling. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, November 2022a. ', 'modified_lines': '', 'original_lines': '13 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'the outputs such as chain-of-thought prompting can enhance their counting ability. Our test does show that models make mistakes with numbers. Though the errors in the case of generative responses may be due to not following instructions and simply outputting ‘multiple times’ instead of the desired ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'texts, thus revealing distinct comprehension challenges faced by LLMs when processing texts of varying lengths. ', 'modified_lines': '', 'original_lines': 'Table 16: Correlation between the distance among multiple evidences in multi-hop questions and the accuracy in the generative scenario. The max distance among all evidence distances are considered as the evidence distance of the question. The overall negative correlations show that the evidence distances are negatively correlated with the accuracies. The interpretation of matrices can be found in Appendix C.2.4. 
Model Pearson Spearson Kendall GPT-4 Claude-3 InternLM-7b InternLM-20b -0.0734 (0.0879) -0.0120 (0.7899) -0.1140 (0.0083) -0.0634 (0.1407) -0.2511 (3.0696×e−09) -0.1826 (4.6228) -0.2287 (8.8721×e−08) -0.2017 (2.1954×e−06) -0.2031 (4.6129×e−09) -0.1385 (9.0407×e−05) -0.1841 (1.2663×e−07) -0.1626 (2.722×e−06) becoming the models’ inner knowledge, which is similar to (Chang et al., 2023)’s observation, and vice versa for those omitted details. These two factors contribute to the difficulty in the model’s recalling details and thus result in overlooking errors. Miscounting Researches (Li et al., 2023a; Feng et al., 2023) has revealed shortcomings in the counting ability of LLMs, especially autoregressive-decoder-based models, and methods unfolding ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-01 15:29:04
ICLR.cc/2025/Conference
bZFhv9POxt
M2ehQeGSdm
[{'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'procedure is accomplished through gpt-3.5-turbo-16k tokenizer. Table 1: Distribution of Question Types in NovelQA: This table provides a breakdown of questions ', 'paragraph_idx': 17, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 Figure 3: Token Count Distribution in NovelQA, including Copyrighted (left) and Public Domain ', 'modified_lines': 'Only (right). The token counts of both the novel and the questions are counted. The tokenization ', 'original_lines': 'Only (right). The token count of both the novel and the questions are counted. The tokenization ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': '3.2 DATA ANNOTATION', 'after_section': '3.2 DATA ANNOTATION', 'context_after': 'annotation instruction to understand the requirements and sign to agree to participate. Annotators are allowed to select novels for annotation based on their familiarity, ensuring they had previously read and comprehensively understood the texts. Meanwhile, we make sure the selected books meet our standards of enough word count and well-developed narratives. We also ensure that each selected novel is either annotated by only one individual, or consistent in version across annotators, despite minor variations among different editions. Each annotator contributes to a typically small number Template Design The first annotation phase relies on a question template, which requires the annotator to fill in the entities from the novel to form valid questions. To design templates, we carried out Quality Control The created data is manually double-checked by three authors of this work. The review procedure follows the criteria of minimizing factual errors, enhancing expression clarity, ', 'paragraph_idx': 25, 'before_section': '3.2 DATA ANNOTATION', 'context_before': 'Annotator Recruitment Our annotators are predominantly English Language and Literature uni- versity students or those with a keen interest in English novels, recruited from local universities. These students are accustomed to closely reading English novels for coursework or as a hobby, and ', 'modified_lines': 'writing reports or papers on them. Before annotation, each annotator was instructed to read the of 20-30 questions per novel. This approach avoids forcing annotators to annotate questions on unfamiliar content. Annotation Guideline While creating the QA tuples, our annotators are instructed to follow several principles below: (1) The annotators are either senior English language and literature college students, or students with high English test scores. (2) The annotators are required to read through the GPT-4 responses on example questions. (3) The annotators should choose novels above 50K tokens and read through the books they choose before annotation. (4) Evidence of each answer should be provided for validation purposes. Evidence should be as sufficient as possible. sufficient pre-tests on several LLMs to analyze their possible weaknesses in long-input QA and novel knowledge memorization. Our pre-test shows that they usually fail to tackle information spanning over multiple chapters, as well as lack attention to details that have no contribution to the main theme. Meanwhile, we also refer to around fifteen books on novel and narration theories (e.g. 
Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. Time Consumption and Rewards Given the annotator’s familiarity with their chosen novels and their experience with similar questions in their academic assignments, creating questions based on their knowledge becomes a manageable task within a reasonable time cost. The annotation reward is of $1.11 to $1.39 per tuple. As an average annotator can write 5 to 6 pieces of data at full speed according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. 5 Published as a conference paper at ICLR 2025 Table 2: Evaluation of Long-Context LLMs on NovelQA. This table presents the performance of four long-context LLMs, including both commercial models (GPT-4, GPT-4o-mini, Claude-2.1, Claude-3- Sonnet and Claude-3.5-Sonnet) and open-source, locally deployed models (InternLM2-Chat-7b/20b and Llama-3.1-8b/70b). Accuracy percentages are reported under two testing scenarios: multichoice and generative. The Max Length column denotes the maximum token length of each model. Max Length Multichoice Generative GPT-4 GPT-4o-mini Claude-2.1 Claude-3 Claude-3.5-Sonnet InternLM2-Chat-7b InternLM2-Chat-20b Llama-3.1-8B Llama-3.1-70B Human baseline 128K 128K 200K 200K 200K 200K 200K 128K 128K ∞ 71.80 71.85 66.84 71.11 77.92 43.51 49.18 62.31 69.39 97.00 46.88 53.32 46.04 53.66 62.30 30.90 32.37 42.65 51.50 90.00 ', 'original_lines': 'writing reports or papers on them. Before annotation, each annotator was instructed to read our of 20-30 questions per novel. This approach avoids forcing annotators annotating questions on unfamiliar content. Time Consumption and Rewards Given the annotator’s familiarity with their chosen novels and their experience with similar questions in their academic assignments, creating questions based on their knowledge becomes a manageable task within a reasonable time cost. The annotation reward is of $1.11 to $1.39 per tuple. As an average annotator can write 5 to 6 pieces of data at full speed according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. sufficient pre-tests on GPT-4 and Claude-2.1 to analyze their possible weaknesses in long-input QA and novel knowledge memorization. Our pre-test shows that they usually fail to tackle information spanning over multiple chapters, as well as lack attention to details that have no contribution to the main theme. Meanwhile, we also refer to around fifteen books on novel and narration theories (e.g. Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '3.1 DATASET OVERVIEW', 'after_section': None, 'context_after': '3Our pilot study compared distractors generated by both GPT-4 and Claude 2.1. 
Interestingly, GPT-4 generated slightly more challenging distractors - both models scored approximately 0.5% lower on GPT-4’s distractors compared to Claude 2.1’s. This indicates no advantageous bias for GPT-4. Table 3: Model Performance by Question Type in Generative Setting: This table details the accuracy scores of four models across different question types. Question types include details (dtl), multi-hop ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'Cohen’s Kappa, indicating a high agreement among annotators. Distractions for Multichoice Setting We use GPT-4 to generate three distracting options for each ', 'modified_lines': 'question and its golden answer and randomly permute the four answers.3 We checked those distrac- tions and rewrote those with similar meaning with the golden answers manually when we double check the data. 4 EXPERIMENTS We focus on long-context models meeting three criteria: a context window of at least 128,000 tokens, accessibility via a full API or public release, and chat functionality. For commercial models, our selection includes GPT-4-128K (OpenAI, 2023a) and Claude 2.1-200K (Anthropic, 2023). Among open-source options, we evaluated models like InternLM2-chat (Team, 2023). 4.1 IMPLEMENTATIONS Settings We employ two evaluation settings: a generative setting where models directly generate short answers, and a multichoice setting with four provided options. Prompts We use uniform prompts for all LLMs, with a start part, novel content, questions, choices in the multichoice setting, and an end part. The prompt structure is shown in Appendix Table 11. Truncation Due to input length limitations, we truncated the novel content from the end to the front following Bai et al. (2023); Li et al. (2023a); An et al. (2023), to meet with the max input length, while keeping questions and other prompts complete. 6 Published as a conference paper at ICLR 2025 ', 'original_lines': 'question and its golden answer and randomly permute the four answers.3 And we check those 5 Published as a conference paper at ICLR 2025 Table 2: Evaluation of Long-Context LLMs on NovelQA. This table presents the performance of four long-context LLMs, including both commercial models (GPT-4, GPT-4o-mini, Claude-2.1, Claude-3- Sonnet and Claude-3.5-Sonnet) and open-source, locally deployed models (InternLM2-Chat-7b/20b and Llama-3.1-8b/70b). Accuracy percentages are reported under two testing scenarios: multichoice and generative. The Max Length column denotes the maximum token length of each model. Max Length Multichoice Generative GPT-4 GPT-4o-mini Claude-2.1 Claude-3 Claude-3.5-Sonnet InternLM2-Chat-7b InternLM2-Chat-20b Llama-3.1-8B Llama-3.1-70B Human baseline 128K 128K 200K 200K 200K 200K 200K 128K 128K ∞ 71.80 71.85 66.84 71.11 77.92 43.51 49.18 62.31 69.39 97.00 46.88 53.32 46.04 53.66 62.30 30.90 32.37 42.65 51.50 90.00 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Commercial LLMs The APIs of commercial LLMs utilized are gpt-4-0125-preview, gpt-4o-mini, Claude-2.1, Claude-3-sonnet and Claude-3.5-sonnet. 
', 'paragraph_idx': 12, 'before_section': None, 'context_before': '- - ', 'modified_lines': 'Evaluating Generative Results Previous researches have proved the ability of LLMs in evaluating the machine-generated answers align with human judgements (Wang et al., 2023; An et al., 2023; Li et al., 2023a) After a pilot study showing that the models show no several preference towards the answers generated by its own kinds (See Appendix C.4.2, we choose GPT-4 to evaluate our generated results. We further conducted a human evaluation on 800 pieces of generative outputs and carried out an inter-evaluator agreement (IEA) test between two human evaluators and the GPT-4 evaluator. In the IEA test, annotators also serve as human evaluators to the novels they were familiar with. Their evaluations were compared to those of GPT-4 evaluators, with Cohen’s kappa score calculated to measure agreement. As showing in Table 7, the result of 89.25% in Cohen’s Kappa indicates a high agreement towards the GPT-4 evaluating results. NovelQA primarily consists of objective questions, which have clear, verifiable answers with a factual basis in the text. This objectivity significantly reduces the impact of model bias in LLM-as-Judge evaluation ', 'original_lines': 'distractions and rewrite those with similar meaning with the golden answers manually when we double check the data. 3.3 ADVANTAGES Our NovelQA dataset serves as a new benchmark for evaluating long-context understanding, distin- guished by several key advantages. Firstly, it surpasses existing benchmarks in length, offering a rigorous test of a model’s ability to navigate and comprehend significantly longer texts. Secondly, the inclusion of clear evidences alongside questions ensures that evaluations are grounded in concrete tex- tual support, enhancing the reliability of assessments. Furthermore, the dataset emphasizes questions that require attention to detailed information, challenging models to move beyond superficial impres- sions to extract specific, nuanced answers. Questions, golden answers, and evidences of the dataset are entirely manually annotated and carefully checked, ensuring high-quality, nuanced questions and answers that reflect complex human thought processes. To prevent against data leakage, we will not release golden answers for the test set, minimizing the risk of overfitting. These features, combined with the dataset’s comprehensive coverage of diverse narratives and meticulous construction, make NovelQA a valuable resource for advancing long-context understanding. 4 EXPERIMENTS We focus on long-context models meeting three criteria: a context window of at least 128,000 tokens, accessibility via a full API or public release, and chat functionality. For commercial models, our 6 Published as a conference paper at ICLR 2025 selection includes GPT-4-128K (OpenAI, 2023a) and Claude 2.1-200K (Anthropic, 2023). Among open-source options, we evaluated models like InternLM2-chat (Team, 2023). 4.1 IMPLEMENTATIONS Settings We employ two evaluation settings: a generative setting where models directly generate short answers, and a multichoice setting with four provided options. Prompts We use uniform prompts for all LLMs, with a start part, novel content, questions, choices in the multichoice setting, and end part. The prompt structure is shown in Appendix Table 11. Truncation Due to input length limitations, we truncated the novel content from the end to the front following Bai et al. (2023); Li et al. (2023a); An et al. 
(2023), to meet with the max input length, while keeping questions and other prompts complete. Evaluating Generative Results Following the findings in Wang et al. (2023) which highlight GPT- 4’s proficiency in assessing the accuracy of short machine-generated answers, we employ GPT-4 (gpt-4-0125-preview) for the evaluation of generative responses in our study, which is also applied in other long-range benchmark studies (An et al., 2023; Li et al., 2023a). We further conducted a human evaluation on 800 pieces of generative outputs and carried an inter-evaluator agreement (IEA) test between two human evaluators and the GPT-4 evaluator. In the IEA test, annotators also serve as human evaluators to the novels they were familiar with. Their evaluations were compared to those of GPT-4 evaluators, with Cohen’s kappa score calculated to measure agreement. Human evaluators, familiar with the content, typically completed their reviews in under half an hour per novel. As showing in Table 7, the result of 89.25% in Cohen’s Kappa indicates a high agreement towards the GPT-4 evaluating results. NovelQA primarily consists of objective questions, which have clear, verifiable answers with a factual basis in the text. This objectivity significantly reduces the impact of model bias in LLM-as-Judge evaluation ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': '4.1', 'context_after': 'annotated. The first round was in a generative setting, and the second round was in a multichoice setting. This process was conducted on 5 novels with a total of 100 questions. The result shows that human performance scored 90 in the generative setting and 97 in the multiple-choice setting. ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'which is only compatible with several LLMs. Therefore, we choose InternLM2-Chat-7b-200K, InternLM2-Chat-20b-200K, Llama-3.1-8b, and Llama-3.1-70b for our experiments. ', 'modified_lines': 'Human Performance We selected books that have been read by multiple annotators, and then have had the readers engaged in a two-round answering process on novels they had not previously ', 'original_lines': 'Human Performance As most of the annotators are from the same university and share common courses and projects, several books have been read by more than one student. We selected such books and have had the readers engaged in a two-round answering process on novels they had not previously ', 'after_paragraph_idx': 41, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'We have also observed three typical errors, hallucination, overlooking, and miscounting, a detailed analysis is conducted in Appendix Sec C.4.6. 4.3 RESULTS BY THE QUESTION TYPE ', 'paragraph_idx': 9, 'before_section': '1 INTRODUCTION', 'context_before': 'generative and multichoice settings, respectively) suggest there is considerable room for improvement in long-context understanding compared with human readers. This is especially true in the generative setting where understanding and recall over long contexts are more challenging. Additionally, ', 'modified_lines': 'commercial models (Claude-3.5-Sonnet and GPT-4o-mini) outperform open-source models in this benchmark. Among closed-source models, LLama family models outperform the InternLM models. All models show a drop in performance in the generative setting compared to the multichoice setting. 
This indicates the increased challenge in generating a correct answer from scratch, as opposed to selecting from provided options. 7 Published as a conference paper at ICLR 2025 Figure 4: Analysis of Accuracy in Generative Setting by Absolute and Relative Token Positions: The two figures above illustrate the accuracy, plotted against the absolute token indexes (left) and the percentage position (right) of each question’s relevant evidences. The x-axis of the absolute token position figure (left), reflecting token indexes, is folded on the right due to the long-tails. ', 'original_lines': 'commercial models (GPT-4 and Claude 3/2.1) outperform open-source models (InternLM2-Chat- 7b and 20b) in this benchmark. All models show a drop in performance in the generative setting compared to the multichoice setting. This indicates the increased challenge in generating a correct answer from scratch, as opposed to selecting from provided options. 7 Published as a conference paper at ICLR 2025 Figure 4: Analysis of Accuracy in Generative Setting by Absolute and Relative Token Positions: The two figures above illustrates the accuracy, plotted against the absolute token indexes (left) and the percentage position (right) of each question’s relevant evidences. The x-axis of the absolute token position figure (left), reflecting token indexes, is folded on the right due to the long-tails. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 9}, {'section': '4.4 RESULTS BY POSITION', 'after_section': None, 'context_after': 'To delve deeper into how long-context LLMs navigate extremely long inputs, we segment novels into two categories based on length: 65k-100K and over 100K tokens. We then examine model accuracy ', 'paragraph_idx': 53, 'before_section': None, 'context_before': '2.61 8.79 ', 'modified_lines': 'Absolute Position In the generative setting, as depicted in Figure 4 (left), all evaluated models show improved performance on questions where the necessary evidence is located before the 100K token mark. This trend highlights a challenge for LLMs in accessing and processing information beyond this threshold, suggesting a diminished capacity to handle very long inputs. The multichoice setting, detailed in Appendix C.4.4, follows a similar pattern, reinforcing the importance of evidence position in model performance. We also present the relationship between the accuracy and the absolute position of evidence, grouping by pre-100K and post-100K, in Table 4. Relative Position By normalizing the evidence positions within the entire novel, we aim to understand if the proportional location of evidence influences model accuracy. This analysis, shown in Figure 4 (right) for the generative setting and in the Appendix C.4.4 for the multichoice setting, indicates that models maintain relatively consistent performance across various relative positions, suggesting that long-context LLMs’ effectiveness is not significantly affected by the evidence’s relative position within the standardized length of novels. ', 'original_lines': 'multichoice setting, detailed in Appendix C.4.4, follows a similar pattern, reinforcing the importance of evidence position in model performance. We also present the relationship between the accuracy and the absolute position of evidence, grouping by pre-100K and post-100K, in Table 4. Relative Position Analysis By normalizing the evidence positions within the entire novel, we aim to understand if the proportional location of evidence influences model accuracy. 
This analysis, shown in Figure 4 (right) for the generative setting and in the Appendix C.4.4 for the multichoice setting, indicates that models maintain relatively consistent performance across various relative positions, suggesting that long-context LLMs’ effectiveness is not significantly affected by the evidence’s relative position within the standardized length of novels. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': None, 'context_after': 'B ETHICS STATEMENTS We are dedicated to ensuring that NovelQA serves exclusively for academic and scientific endeavors. ', 'paragraph_idx': 37, 'before_section': None, 'context_before': 'Language Limitation: NovelQA, and all associated data are exclusively in English. The following researches may consider to extend the language covered. ', 'modified_lines': 'Truncation: Truncation is a standard practice for adapting longer texts to fit within the constrained context windows of LLMs, especially when evaluating their performance on long-context tasks. While it’s possible that some evidence may fall within the truncated parts. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 MAIN RESULTS', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 ', 'paragraph_idx': 44, 'before_section': None, 'context_before': 'Ability to subtle or ambiguous information and reason. ', 'modified_lines': 'absolute token index) can be found in Table 12: Approximately 18.4% of the evidence is located beyond the 130K token mark (outside GPT-4’s context window), and 6.4% is beyond 210K tokens (outside Claude and InternLM’s context windows). Given the varying lengths of novels, we also analyzed the relative position distribution (by relative token position: token position / total tokens in the corresponding book) in Table 13: Our analysis reveals that the highest proportion of evidence instances (25.17%) occurs within the first 10% of the novels, while the distribution across the middle sections is relatively uniform. This concentration in the early parts of the novels can be attributed to the initial introduction of characters and plot elements. The first occurrence of evidence related to these introductions naturally falls in the earlier sections of the novels. It’s important to note that during the question formulation process, we did not deliberately adjust the distribution of questions. The observed pattern in answer locations emerges naturally from the narrative structure of the novels. C.3 MULTICHOICE DISTRACTORS GENERATION Our pilot study compared distractors generated by both GPT-4 and Claude 2.1. Interestingly, GPT-4 generated slightly more challenging distractors - both models scored approximately only 0.5% lower on GPT-4’s distractors compared to Claude 2.1’s. Thus, we consider the bias caused by the model choice not significant in the distractors generation, and safely chose the choices generated by GPT-4. C.4 EXPERIMENTS AND ANALYSIS C.4.1 IMPLEMENTATIONS We also tried to test Gemini-1.54, but the generation of 816 questions from 29 novels was blocked for unknown reasons (with only 14 of these novels under copyright protection). Consequently, we have decided not to present Gemini’s result. Experiment Configs Given the cost of running long-context APIs, we request that each model respond to all questions for a given book in a single session. 
To ensure fair comparisons, local- deployed LLMs also answer all questions for a book at once. We set ‘temperature = 0’ to eliminate randomness and keep other hyper-parameters default. C.4.2 EVALUATOR MODEL To prove that the model bias has little or no effect on our final results, we conducted a thorough analysis comparing different evaluator models to check their potential, demonstrated in Table 15. The results show minimal variance between evaluators, with Claude-3.5 being slightly stricter but showing no significant model preference. This consistency is largely due to NovelQA’s objective nature, with well-defined questions and answers that leave little room for evaluator bias. 4https://deepmind.google/technologies/gemini/flash/ 18 ', 'original_lines': 'Figure 6: Analysis of Accuracy in multichoice Setting by Absolute and Relative Token Positions: This figure illustrates the accuracy in the multichoice setting of NovelQA, plotted against the absolute position (left) and percentage position (right) of each question’s relevant evidence within the novel. Each subplot represents a different model. The x-axis of the absolute position figure (left), reflecting token indexes, is folded on the right due to the long-tail distribution in the lengths of the selected novels. C.4.4 MULTICHOICE PERFORMANCE RELATED TO EVIDENCE POSITIONS Figure 6 which presents the relationships between the accuracy and the absolute or relative positions accordingly shows similar trends to which are observed in the generative setting. To be specific, the accuracy by absolute token position remains high when related evidences are before 100K’s text length, while drops after 100K. Meanwhile, the accuracy by relative position remains relatively even. A comparison is made between the accuracies within two ranges, 65K(the lowest token count) to 100K (namely pre-100K) and 100K to the end (namely post-100K). Figure 7 and Table 4 present a clearer contrast between these two ranges, where the precision drops dramatically after the 100K token. C.4.5 MULTI-HOP ACCURACY RELATED TO EVIDENCE DISTANCE We also measured the relationship between the evidence distance within each multi-hop question and the accuracy under the generative task. For each multi-hop question, we obtain the distances among all evidences, and consider the max distance among them as the evidence distance. This can be interpreted as the model must memorize at least one of its evidences for the max distance to meet the final evidence in order to obtain the answer. The correlation between the evidence distance and the accuracy for multi-hop questions is demonstrated in Table 16. Among the indices, Pearson correlation assumes that the two input distributions have linear correlation. Spearman correlation assumes that the two input distributions have monotonic correlation. Kendall correlation assumes 19 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 16: Correlation between the distance among multiple evidences in multi-hop questions and the accuracy in the generative scenario. The max distance among all evidence distances are considered as the evidence distance of the question. The overall negative correlations show that the evidence ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'texts, thus revealing distinct comprehension challenges faced by LLMs when processing texts of varying lengths. 
', 'modified_lines': '', 'original_lines': 'the outputs such as chain-of-thought prompting can enhance their counting ability. Our test does show that models make mistakes with numbers. Though the errors in the case of generative responses may be due to not following instructions and simply outputting ‘multiple times’ instead of the desired specific times, the accuracy in the multichoice setting has still only reached 38.53% to 49.56% for the chosen four models, as shown in Table 17. Even in the simplest question which asks for the appearing frequency of certain phrases, the models still make mistakes. 22 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 10:34:48
ICLR.cc/2025/Conference
M2ehQeGSdm
WGLMcyyWmc
[]
2025-03-15 06:42:32
ICLR.cc/2025/Conference
WGLMcyyWmc
i2uwNn757k
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'complexity, length, and narrative coherence, making it an ideal tool for assessing deep textual understanding in LLMs. This paper details the design and construc- tion of NovelQA, focusing on its comprehensive manual annotation process and the variety of question types aimed at evaluating nuanced comprehension. Our 1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'However, the evaluation of these models’ long-context abilities remains a challenge due to the limitations of current benchmarks. To address this gap, we introduce NovelQA, a benchmark tailored for evaluating LLMs with complex, extended ', 'modified_lines': 'narratives. Constructed from English novels, NovelQA offers a unique blend of evaluation of long-context LLMs on NovelQA reveals significant insights into their strengths and weaknesses. Notably, the models struggle with multi-hop reasoning, detail-oriented questions, and handling extremely long inputs, with average lengths exceeding 200,000 tokens. Results highlight the need for substantial advancements in LLMs to enhance their long-context comprehension and contribute effectively to computational literary analysis. ', 'original_lines': 'narratives. NovelQA, constructed from English novels, offers a unique blend of evaluation of long-context LLMs on NovelQA reveals significant insights into their strengths and weaknesses. Notably, the models struggle with multi-hop rea- soning, detail-oriented questions, and handling extremely long inputs, averaging over 200,000 tokens. Results highlight the need for substantial advancements in LLMs to enhance their long-context comprehension and contribute effectively to computational literary analysis. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '1 ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'To fill this gap, we introduce NovelQA, a benchmark crafted to specifically evaluate LLMs’ per- formance on texts with averaged context windows exceeding 200,000 tokens. Unlike existing ', 'modified_lines': ' ∗ Equal contribution, paper finished at Westlake University † Correspondence to Yue Zhang ([email protected]) and Qian Wang ([email protected]). ', 'original_lines': 'benchmarks (Shaham et al., 2023; An et al., 2023; Adams et al., 2024), NovelQA addresses the need ∗ Equal Contribution † Corresponding to Yue Zhang ([email protected]) and Qian Wang ([email protected]). ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'for assessing extremely long-context understanding, offering a refined and comprehensive tool for ad- vancing natural language processing capabilities. We construct NovelQA based on novels in English, which are also ideal for testing long-context modeling because they are long and complex, with plots ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'each question, models are evaluated under two distinct settings – multichoice, where the task is to select the correct answer from four options, and Generative, where the model generates an answer. 
', 'modified_lines': 'benchmarks (Shaham et al., 2023; An et al., 2023; Adams et al., 2024), NovelQA addresses the need ', 'original_lines': '', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1We have released the demonstrations and input of NovelQA, and created a leaderboard. More details can be found in https://novelqa.github.io/. And NovelQA is released under the Apache-2.0 License. For the public access, we have released all constructed data on Huggingface https://huggingface.co/ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'through human effort of experts. We present a detailed comparison between NovelQA and other benchmarks in Appendix Fig 5. ', 'modified_lines': '', 'original_lines': 'We also discuss related Long-Context Language Modeling methods in Appendix Sec C.1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'Meaning, Span, Setting, Relation, Character, Plot). Multi-hop Single-hop Detail Sum ', 'paragraph_idx': 17, 'before_section': '2 RELATED WORK', 'context_before': 'procedure is accomplished through gpt-3.5-turbo-16k tokenizer. Table 1: Distribution of Question Types in NovelQA: This table provides a breakdown of questions ', 'modified_lines': 'across different complexity categories (Multi-hop, Single-hop, Detail) and aspect categories (Times, ', 'original_lines': 'across different complexity categories (Multi-hop, Single-hop, Detail) and aspect categories (Time, ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 16}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 DATA 3.1 DATASET OVERVIEW ', 'paragraph_idx': 19, 'before_section': None, 'context_before': '591 2305 ', 'modified_lines': 'We also discuss related Long-Context Language Modeling methods in Appendix Sec C.1. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 DATASET OVERVIEW', 'after_section': '3.1 DATASET OVERVIEW', 'context_after': 'Book Selection The books in NovelQA contain 65 free public-domain books from the Project (approximately 67k tokens). The distribution of the token count is illustrated in Figure 3. Question Distribution The annotated questions can be classified by the complexity of solving the ', 'paragraph_idx': 20, 'before_section': '3.1 DATASET OVERVIEW', 'context_before': 'a novel text N and a question Qi are combined to send into the model each time, and the generated answer is compared with the answer Ai. In the multichoice setting, the novel N , a question Qi, and the four choices ai,0 to ai,3 are sent into the model, and the output is evaluated according to the gold ', 'modified_lines': 'label ai,gold. The evidences domain consists of either the original excerpts from the novel or the reasoning steps written by the annotator. Gutenberg2 and 24 copyright-protected books purchased from the Internet. The selection of the books follows a criterion to ensure the diversity in the eras, styles, and themes, thus we decided to include a portion of copyrighted novels. All selected books are in English and exceed 50K words ', 'original_lines': 'label ai,gold. Meanwhile, the evidences domain consists of either the original excerpts from the novel or the reasoning steps written by the annotator. Gutenberg2 and 24 copyright protected books purchased from the Internet. 
The selection of the books follows a criteria to ensure the diversities in the eras, styles, and themes, thus we decided to included a portion of copyrighted novels. All selected books are in English and exceed 50K words ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 20}, {'section': '3.1 DATASET OVERVIEW', 'after_section': '3.1 DATASET OVERVIEW', 'context_after': 'questions in our dataset is displayed in Table 1. We also list the ability tested by each kind of question in Appendix Table 10. Advantages Our NovelQA dataset serves as a new benchmark for evaluating long-context under- standing, distinguished by several key advantages. Firstly, it surpasses existing benchmarks in length, ', 'paragraph_idx': 22, 'before_section': '3.1 DATASET OVERVIEW', 'context_before': 'on, the data entails seven types. A detailed specification of each type is listed in Appendix C.2.2. We have a total of 2305 questions, of which 1640 are from 65 public domain novels, while the remaining 665 are from 24 copyrighted novels. According to the classification above, the distribution of the ', 'modified_lines': ' 2https://www.gutenberg.org/ 4 65876200K300K400K500K600K84721200K300K400K500K600KMin CountMedianMean Published as a conference paper at ICLR 2025 ', 'original_lines': ' 2https://www.gutenberg.org/ 4 65876200K300K400K500K600K84721200K300K400K500K600KMin CountMedianMean Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '3.2 DATA ANNOTATION', 'after_section': '3.2 DATA ANNOTATION', 'context_after': 'Annotator Recruitment Our annotators are predominantly English Language and Literature uni- versity students or those with a keen interest in English novels, recruited from local universities. These students are accustomed to closely reading English novels for coursework or as a hobby, and writing reports or papers on them. Before annotation, each annotator was instructed to read the novel is either annotated by only one individual, or consistent in version across annotators, despite minor variations among different editions. Each annotator contributes to a typically small number of 20-30 questions per novel. This approach avoids forcing annotators to annotate questions on ', 'paragraph_idx': 24, 'before_section': '3.2 DATA ANNOTATION', 'context_before': 'annotation procedure consists of two phases: (1) Template-based phase: The annotators can fill entities into 10+ templates (see Appendix Sec C.2.1) that we design to be related to multi-hop or detailed information. This phase entails half of the data, mainly contributing to the multi-hop ones. ', 'modified_lines': '(2) Free-form phrase: To ensure the diversity of question expression and align the questions to the natural distribution, the second half of our data is annotated without a template, namely, the annotators contribute to any difficult questions that they come up with freely. annotation instructions to understand the requirements and sign to agree to participate. Annotators are allowed to select novels for annotation based on their familiarity, ensuring they have previously read and comprehensively understood the texts. Meanwhile, we make sure the selected books meet our standards of enough word count and well-developed narratives. 
We also ensure that each selected ', 'original_lines': '(2) Free-formed phrase: To ensure the diversity of question expression and align the questions to the natural distribution, our second half of the data is annotated without a template, namely, the annotators contribute to any difficult questions that they come up with freely. annotation instruction to understand the requirements and sign to agree to participate. Annotators are allowed to select novels for annotation based on their familiarity, ensuring they had previously read and comprehensively understood the texts. Meanwhile, we make sure the selected books meet our standards of enough word count and well-developed narratives. We also ensure that each selected ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 24}, {'section': '3.2 DATA ANNOTATION', 'after_section': '3.2 DATA ANNOTATION', 'context_after': 'Time Consumption and Rewards Given the annotator’s familiarity with their chosen novels and their experience with similar questions in their academic assignments, creating questions based on their knowledge becomes a manageable task within a reasonable time cost. The annotation reward is of $1.11 to $1.39 per tuple. As an average annotator can write 5 to 6 pieces of data at full speed 5 ', 'paragraph_idx': 27, 'before_section': '3.2 DATA ANNOTATION', 'context_before': 'sufficient pre-tests on several LLMs to analyze their possible weaknesses in long-input QA and novel knowledge memorization. Our pre-test shows that they usually fail to tackle information spanning over multiple chapters, as well as lack attention to details that have no contribution to the main theme. ', 'modified_lines': 'We also refer to around fifteen books on novel and narration theories (e.g. Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. ', 'original_lines': 'Meanwhile, we also refer to around fifteen books on novel and narration theories (e.g. Forster, 1927; Tobias, 2012; Schmidt, 2012; McKee, 2005) to ensure our template covers more aspects that a novel can discuss (e.g., character, setting, theme). Templates are ensured to test on facts (e.g., events, entities, numbers) that can be traced back to specific evidences from the books, instead of on any subjective feelings or analysis of the readers. according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 27}, {'section': '3.1 DATASET OVERVIEW', 'after_section': None, 'context_after': 'Quality Control The created data is manually double-checked by three authors of this work. The review procedure follows the criteria of minimizing factual errors, enhancing expression clarity, and maintaining challenges to LLMs. Besides, we ensured that all questions are based on factual Distractions for Multichoice Setting We use GPT-4 to generate three distracting options for each check the data. 4 EXPERIMENTS ', 'paragraph_idx': 20, 'before_section': None, 'context_before': '90.00 ', 'modified_lines': 'according to our observation, the $5.56 to $8.34 hourly wage is above the local legal minimum wage of $2.78/hour. The annotation process costs around $3,500. 
descriptions and eliminated any subjective ones. Consequently, only 79.4% of the collected data are preserved, resulting in a final dataset of 2305 QA tuples. Meanwhile, we have also conducted the inter-annotator agreement (IAA) test, focusing on evaluating the quality of annotated question- answer pairs. Annotators are required to choose books they are familiar with but have not annotated themselves to answer questions on. As annotators are mostly from the same local university and share similar courses and projects, many can find books they have read in common. We select books read by at least two annotators and have the other reader answer multichoice questions. As the respondents are quite familiar with the target novels and have a strong academic background in English Literature, it takes them around 2-3 hours or less to complete each novel. The IAA test shows a score of 94.6% in Cohen’s Kappa, indicating a high agreement among annotators. question and its golden answer, and randomly permute the four answers.3 We manually checked those distractions and rewrote those with similar meaning with the golden answers when we double ', 'original_lines': 'descriptions and eliminated any subjective ones. Consequently, only 79.4% of the collected data are preserved, resulting a final dataset of 2305 QA tuples. Meanwhile, we have also conducted the inter- annotator agreement (IAA) test, focusing on evaluating the quality of annotated question-answer pairs. Annotators are required to choose books they are familiar with but have not annotated themselves to answer questions on. As annotators are mostly from the same local university and share similar courses and projects, many can find books they have read in common. We select books read by at least two annotators and have the other reader answer multichoice questions. As the respondents are quite familiar with the target novels and have a strong academic background in English Literature, it takes them around 2-3 hours or less to complete each novel. The IAA test shows a score of 94.6% in Cohen’s Kappa, indicating a high agreement among annotators. question and its golden answer and randomly permute the four answers.3 We checked those distrac- tions and rewrote those with similar meaning with the golden answers manually when we double ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3Our pilot study compared distractors generated by both GPT-4 and Claude 2.1. Interestingly, GPT-4 generated slightly more challenging distractors - both models scored approximately 0.5% lower on GPT-4’s ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Prompts We use uniform prompts for all LLMs, with a start part, novel content, questions, choices in the multichoice setting, and an end part. The prompt structure is shown in Appendix Table 11. ', 'modified_lines': '', 'original_lines': ' Truncation Due to input length limitations, we truncated the novel content from the end to the front following Bai et al. (2023); Li et al. (2023a); An et al. (2023), to meet with the max input length, while keeping questions and other prompts complete. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': '4.1', 'context_after': 'Evaluating Generative Results Previous researches have proved the ability of LLMs in evaluating the machine-generated answers align with human judgements (Wang et al., 2023; An et al., 2023; Li et al., 2023a) After a pilot study showing that the models show no several preference towards the answers generated by its own kinds (See Appendix C.4.2, we choose GPT-4 to evaluate our generated results. We further conducted a human evaluation on 800 pieces of generative outputs and carried out an inter-evaluator agreement (IEA) test between two human evaluators and the GPT-4 evaluator. In evaluations were compared to those of GPT-4 evaluators, with Cohen’s kappa score calculated to agreement towards the GPT-4 evaluating results. NovelQA primarily consists of objective questions, which have clear, verifiable answers with a factual basis in the text. This objectivity significantly reduces the impact of model bias in LLM-as-Judge evaluation ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '- - ', 'modified_lines': 'Truncation Due to input length limitations, we truncated the novel content from the end to the front following Bai et al. (2023); Li et al. (2023a); An et al. (2023), to meet with the max input length, while keeping questions and other prompts complete. the IEA test, annotators also serve as human evaluators of the novels they were familiar with. Their measure agreement. As shown in Table 7, the result of 89.25% in Cohen’s Kappa indicates a high ', 'original_lines': 'the IEA test, annotators also serve as human evaluators to the novels they were familiar with. Their measure agreement. As showing in Table 7, the result of 89.25% in Cohen’s Kappa indicates a high ', 'after_paragraph_idx': 36, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': None, 'context_after': '4.2 MAIN RESULTS ', 'paragraph_idx': 40, 'before_section': '4.1', 'context_before': 'which is only compatible with several LLMs. Therefore, we choose InternLM2-Chat-7b-200K, InternLM2-Chat-20b-200K, Llama-3.1-8b, and Llama-3.1-70b for our experiments. ', 'modified_lines': 'Human Performance We selected books that have been read by multiple annotators, and then have had the readers engage in a two-round answering process on novels they had not previously annotated. The first round was in a generative setting, and the second round was in a multichoice setting. This process was conducted on 5 novels with a total of 100 questions. The result shows that human performance scored 90 in the generative setting and 97 in the multiple-choice setting. ', 'original_lines': 'Human Performance We selected books that have been read by multiple annotators, and then have had the readers engaged in a two-round answering process on novels they had not previously annotated. The first round was in a generative setting, and the second round was in a multichoice setting. This process was conducted on 5 novels with a total of 100 questions. The result shows that human performance scored 90 in the generative setting and 97 in the multiple-choice setting. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 39}, {'section': '4.2 MAIN RESULTS', 'after_section': '4.2 MAIN RESULTS', 'context_after': 'All models show a drop in performance in the generative setting compared to the multichoice setting. 
7 ', 'paragraph_idx': 41, 'before_section': '4.2 MAIN RESULTS', 'context_before': 'in long-context understanding compared with human readers. This is especially true in the generative setting where understanding and recall over long contexts are more challenging. Additionally, commercial models (Claude-3.5-Sonnet and GPT-4o-mini) outperform open-source models in this ', 'modified_lines': 'benchmark. Among closed-source models, LLaMA family models outperform the InternLM models. ', 'original_lines': 'benchmark. Among closed-source models, LLama family models outperform the InternLM models. This indicates the increased challenge in generating a correct answer from scratch, as opposed to selecting from provided options. ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '4.1', 'after_section': None, 'context_after': '4.3 RESULTS BY THE QUESTION TYPE ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'percentage position (right) of each question’s relevant evidences. The x-axis of the absolute token position figure (left), reflecting token indexes, is folded on the right due to the long-tails. ', 'modified_lines': 'This indicates the increased challenge in generating a correct answer from scratch, as opposed to selecting from provided options. We appended a typical error analysis in Appendix Sec C.4.6. ', 'original_lines': 'We have also observed three typical errors, hallucination, overlooking, and miscounting, a detailed analysis is conducted in Appendix Sec C.4.6. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'InternLM2-Chat-7b InternLM2-Chat-20b ', 'paragraph_idx': 32, 'before_section': None, 'context_before': 'Sufficiency Avg. GPT-4 ', 'modified_lines': 'Claude 2.1 ', 'original_lines': 'Cluade 2.1 ', 'after_paragraph_idx': 32, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'retrieving specific information from lengthy contexts, especially for lengths are 100,000. Moreover, operating LLMs on inputs exceeding 200,000 tokens faces technical challenges, notably in terms of memory requirements and associated costs. ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'We introduced NovelQA, a long-range benchmark designed to assess the long-context comprehension abilities of LLMs. Utilizing representative English novels, NovelQA presents LLMs with the challenge of navigating complex, real-world texts. Our evaluations reveal that both commercial and ', 'modified_lines': 'open-source models face challenges with detailed understanding, multi-hop reasoning, and accurately ', 'original_lines': 'open-source models face difficulties with detailed understanding, multi-hop reasoning, and accurately ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-31 03:25:56
ICLR.cc/2025/Conference
uqSYMqhUJN
nQjlCp4RWF
[{'section': 'Abstract', 'after_section': None, 'context_after': '• We show that a simple metric such as the MSE is enough for assessing the redundancy of ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• We provide a comprehensive analysis of internal representation similarities across various pretrained foundation models, revealing consistent patterns between blocks within each ', 'modified_lines': 'architecture, independent of the dataset (Figures 2 and 8 to 9). ', 'original_lines': 'architecture, independent of the dataset (Figures 2 and 7 to 8). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '• We validate our method on vision-based classification tasks using diverse pretrained models and datasets, demonstrating its applicability and effectiveness across different architectures ', 'modified_lines': 'and datasets (Tables 1, 7 and 8). ', 'original_lines': 'and datasets (Tables 1, 6 and 7). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 REDUNDANT BLOCKS APPROXIMATION', 'after_section': '3 REDUNDANT BLOCKS APPROXIMATION', 'context_after': 'any downstream linear classifier on top of the simplified model for the desired task, retaining the original architecture’s overall structure while significantly decreasing the number of parameters and 4 EXPERIMENTS ', 'paragraph_idx': 22, 'before_section': '3 REDUNDANT BLOCKS APPROXIMATION', 'context_before': 'approximation, using the transformation matrix T to bypass these blocks. This process reduces model parameters and computational complexity with minimal impact on the ', 'modified_lines': 'resulting representations, as shown in Figures 3, 4 and 10 to 13. Additionally, it is possible train computation costs, as shown in Tables 1, 2 and 7 to 9. ', 'original_lines': 'resulting representations, as shown in Figures 3, 4 and 9 to 12. Additionally, it is possible train computation costs, as shown in Tables 1, 2 and 6 to 8. ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 21}, {'section': '4.1 BLOCK SIMILARITIES', 'after_section': '4.1 BLOCK SIMILARITIES', 'context_after': 'Results and Analysis. Figure 2 presents the cosine similarity matrices between blocks of the ViT-B and DiNO-S models on MNIST and CIFAR-100. These matrices illustrate the internal block-by- block similarities within each architecture. Our results reveal that while the patterns of similarity vary ', 'paragraph_idx': 24, 'before_section': '4.1 BLOCK SIMILARITIES', 'context_before': 'representation. This ensures that the analysis remains aligned with the key components of the model’s final predictions. This flexibility enables the method to adapt to different model architectures and tasks, where tokens other than the [CLS] may hold more relevant information. Model and dataset ', 'modified_lines': 'details can be found in Table 5 and Table 6, respectively. ', 'original_lines': 'details can be found in Table 4 and Table 5, respectively. ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': '4.1 BLOCK SIMILARITIES', 'after_section': '4.1 BLOCK SIMILARITIES', 'context_after': 'where wide and deep trained from scratch models tend to exhibit a distinctive “block structure” in their representations, linked to model overparameterization. 
Our results extend this observation by showing that block structures also emerge in pretrained foundation models, with their presence Takeaway. The representation patterns generated by pretrained models are primarily determined by the architecture, and remain consistent across different datasets. ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 'tions of different blocks using the Classify token ([CLS]) token, providing insights into redundancy in foundation pretrained models. The matrices reveal that the similarity structure between computational blocks is predominantly influenced by the model architecture itself, rather than the specific dataset. ', 'modified_lines': 'Please refer to Figures 8 and 9 for additional results using other metrics and models. primarily dependent on the architecture. Please refer to Figures 8 and 9 for additional results. ', 'original_lines': 'Please refer to Figures 7 and 8 for additional results using other metrics and models. primarily dependent on the architecture. Please refer to Figures 7 and 8 for additional results. ', 'after_paragraph_idx': 24, 'before_paragraph_idx': None}, {'section': '4.2 REDUDANT BLOCK APPROXIMATION', 'after_section': None, 'context_after': 'Original ', 'paragraph_idx': 28, 'before_section': '4.2 REDUDANT BLOCK APPROXIMATION', 'context_before': 'Quantitative Analysis. As illustrated in Figure 3, in most cases, the BR decreases as the block depth increases. This suggests that approximating the final blocks would lead to significant changes in the final representations, indicating their critical role in maintaining similar final representations. ', 'modified_lines': 'However, in the case of DEiT-S, the trend is reversed. Here, the BR is higher in the central blocks and lower in the initial ones. This is confirmed by the dissimilarity between the last-layer representations, which increases when the earlier blocks are removed in DEiT-S, whereas the opposite is observed in other models. These findings reinforce the intuition behind the BR metric, demonstrating a correlation between BR and the final representation similarity when approximating blocks. In some instances, such as with the MNIST dataset, the BR scores remain relatively consistent across blocks, indicating that the representations are largely similar one to another. However, for more complex datasets like CIFAR-100, the representations in the final or in the first blocks become increasingly dissimilar, making it advantageous to approximate intermediate blocks. This suggests that the BR metric is influenced not only by the architecture but also by the complexity of the dataset, allowing for targeted approximations that reduce model parameters and complexity without significantly compromising performance. ', 'original_lines': 'However, in the case of DEiT-S, the trend is reversed. Here, the BR is higher in the central blocks and lower in the initial ones. This is confirmed by the dissimilarity (MSE) between the last-layer representations, which increases when the earlier blocks are removed in DEiT-S, whereas the opposite is observed in other models. These findings reinforce the intuition behind the BR metric, demonstrating a correlation between BR and the final representation similarity when approximating blocks. In some instances, such as with the MNIST dataset, the BR scores remain relatively consistent across blocks, indicating that the representations are largely similar one to another. 
However, for more complex datasets like CIFAR-100, the representations in the final or in the first blocks become increasingly dissimilar, making it advantageous to approximate intermediate blocks. This suggests that the BR metric is influenced not only by the architecture but also by the complexity of the dataset, allowing for targeted approximations that reduce model parameters and complexity without significantly compromising performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '4.2 REDUDANT BLOCK APPROXIMATION', 'after_section': None, 'context_after': 'Qualitative Analysis. To further investigate the relationship between BR and representation (dis)similarity, Figure 4 and Figure 5 show the PCA projection of the final block’s representations in ', 'paragraph_idx': 33, 'before_section': '4.2 REDUDANT BLOCK APPROXIMATION', 'context_before': 'that in this model, the last layer representations are crucial, making it more effective to approximate earlier blocks instead. Note that for CIFAR-100 (bottom right), only the overall structure of the space can be observed, as the 100 classes make it challenging to distinguish labels based on color. ', 'modified_lines': 'For further results approximating other blocks and using other encoders, refer to Figures 10 to 12. ', 'original_lines': 'For further results approximating other blocks and using other encoders, refer to Figures 9 to 11. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': '4.2 REDUDANT BLOCK APPROXIMATION', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 30, 'before_section': '4.2 REDUDANT BLOCK APPROXIMATION', 'context_before': 'plots visualize the representations generated using the DiNO-S and DEiT-S pretrained encoders across the MNIST, F-MNIST, CIFAR-10, and CIFAR-100 datasets. For CIFAR-10, having 100 classes, only the overall structure of the representation space is visible, making it difficult to ', 'modified_lines': 'distinguish individual labels by color. In Figure 4, approximating the final block results in noticeable deviations from the original representations, while in Figure 5, the approximated representation ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 30}, {'section': 'Abstract', 'after_section': None, 'context_after': 'remains similar to the original one. This observation aligns with the results from Figure 3, where Original ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '−20−1001020−20−1001020−20−100102030−20−15−10−505101520−20−100102030−20−100102030−30−20−100102030−20−1001020−20−1001020−30−20−1001020−20−100102030−30−20−1001020−20−10010−20−15−10−505101520−30−20−1001020−30−20−1001020 Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'approximating the appropriate block can lead to significant changes in representations. Finally, in Figure 7, we present an ablation study on various similarity metrics, analyzing their correlation with downstream accuracy. The results demonstrate that the BR metric is particularly effective in identifying the optimal blocks for approximation. For additional visualizations, please refer to Figures 10 to 13. ', 'original_lines': 'distinguish individual labels by color. In Figure 4, approximating the final block results in noticeable deviations from the original representations, while in Figure 5, the approximated representation approximating the appropriate block can lead to significant changes in representations. 
For additional visualizations, please refer to Figures 9 to 12. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Takeaway. Approximating redundant blocks effectively reduces model parameters and complexity without significantly compromising representation fidelity. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'offering potential performance improvements while reducing model complexity and parameter count. Note that for CIFAR-100 (bottom right), only the overall structure of the space can be observed, as the 100 classes make it challenging to distinguish labels based on color. For further results ', 'modified_lines': 'approximating other blocks and using other encoders, refer to Figures 10 to 12. ', 'original_lines': 'approximating other blocks and using other encoders, refer to Figures 9 to 11. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Encoder ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'represents the block whose output is used to approximate the second block’s output. The ”Num. Blocks” column indicates the total number of remaining blocks after the approximation, and the ”Num. Params” column shows the number of model parameters. The proposed method preserves ', 'modified_lines': 'performance while reducing the number of parameters. Please refer to Table 7 for the results on all the models and datasets, as well as Table 8. ', 'original_lines': 'performance while reducing the number of parameters. Please refer to Table 6 for the results on all the models and datasets, as well as Table 7. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 11 3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 3 ', 'modified_lines': '', 'original_lines': '5 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 11 3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 3 ', 'modified_lines': '', 'original_lines': '5 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 11 3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 3 ', 'modified_lines': '', 'original_lines': '5 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 9 2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '7 1 ', 'modified_lines': '', 'original_lines': '3 8 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 9 2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '7 1 ', 'modified_lines': '', 'original_lines': '3 8 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 9 2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '7 1 ', 'modified_lines': '', 'original_lines': '3 8 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '→ → → ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '→ → ', 'modified_lines': '', 'original_lines': '→ → ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, 
{'section': 'Abstract', 'after_section': None, 'context_after': '→ → ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '→ → → ', 'modified_lines': '', 'original_lines': '→ → ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '→ → ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '→ → → ', 'modified_lines': '', 'original_lines': '→ → ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 10 10 ', 'modified_lines': '', 'original_lines': '10 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 10 10 ', 'modified_lines': '', 'original_lines': '10 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 10 10 ', 'modified_lines': '', 'original_lines': '10 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '20.19M 20.19M ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '18.56M 18.56M 18.56M ', 'modified_lines': '', 'original_lines': '18.56M 18.56M ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '20.43M 20.43M ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '18.80M 18.80M 18.80M ', 'modified_lines': '', 'original_lines': '18.80M 18.80M ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '20.19M 20.19M ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '18.56M 18.56M 18.56M ', 'modified_lines': '', 'original_lines': '18.56M 18.56M ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '95.37 94.77 95.76 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '94.91 95.67 ', 'modified_lines': '', 'original_lines': '95.16 95.27 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '96.54 92.46 96.99 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '96.93 96.74 ', 'modified_lines': '', 'original_lines': '96.93 97.03 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '95.68 95.64 95.99 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '95.81 95.35 ', 'modified_lines': '', 'original_lines': '95.86 95.87 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.10 0.08 0.11 0.17 0.44 84.93 90.97 85.81 92.09 93.03 89.16 94.87 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.19 0.08 ', 'modified_lines': '', 'original_lines': '0.08 0.58 ± ± ± ± ± ± ± ± ± ± ± ± 94.18 91.56 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.12 0.24 0.19 0.09 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.22 0.26 ', 'modified_lines': '', 'original_lines': '0.07 
0.23 ± ± ± ± ± ± ± ± ± ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 REDUDANT BLOCK APPROXIMATION', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 30, 'before_section': None, 'context_before': 'the ImageNet1k dataset. As shown in the leftmost correlation matrix and highlighted in green in the table, approximating redundant blocks yields comparable results while reducing both the number of parameters and computational cost. Additionally, the rightmost correlation matrix, along with ', 'modified_lines': 'the results highlighted in violet in the table, demonstrates that approximating four redundant blocks yields better results than approximating three non-redundant blocks. Overall, performance remains similar or improved, demonstrating that a simple linear transformation is sufficient to approximate different blocks of a NN, significantly reducing the number of parameters and model complexity. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'It’s important to note that this transformation is uniformly applied to all tokens, further optimizing the process, with no additional training or fine-tuning required afterward. Additional results on Table 2: Image Classification Performance: RBA vs. Skip Across Seeds. Accuracy scores for ViT-S on CIFAR-10 and CIFAR-100F are reported using 3 different seeds. The ”Approx.” ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'classification performance can be found in Table 7. ', 'original_lines': 'the results highlighted in violet in the table, demonstrates that approximating four redundant blocks yields better results than approximating three non-redundant blocks. Overall, performance remains similar or improved, demonstrating that a simple linear transformation is sufficient to approximate different blocks of a NN, significantly reducing the number of parameters and model complexity. classification performance can be found in Table 6. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '11 11 12 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '10 10 10 ', 'modified_lines': '', 'original_lines': '10 10 11 11 11 11 11 11 11 11 11 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '71.56 89.65 81.24 93.40 95.87 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '73.94 66.27 ', 'modified_lines': '', 'original_lines': '74.79 85.74 70.90 83.21 88.25 86.23 83.42 87.57 88.70 89.98 93.77 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1.62 0.52 0.48 0.32 0.08 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.34 0.76 ', 'modified_lines': '', 'original_lines': '1.56 0.32 0.09 0.52 0.48 0.23 0.63 0.52 0.24 0.46 0.69 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '± ± ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '± ± ± ', 'modified_lines': ' ', 'original_lines': '± ± ± ± ± ± ± ± ± ± ± ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '50.19 70.75 60.22 76.32 81.29 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '45.00 42.76 ', 'modified_lines': '', 'original_lines': '54.62 63.79 47.54 62.23 69.79 66.69 61.96 68.70 69.33 71.80 78.68 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.38 0.39 0.75 0.30 0.20 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.31 0.75 ', 'modified_lines': '', 'original_lines': '0.52 0.66 0.37 0.21 0.02 0.48 0.55 0.31 0.39 0.22 0.29 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '± ± ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '± ± ± ', 'modified_lines': ' ', 'original_lines': '± ± ± ± ± ± ± ± ± ± ± ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '93.03 89.16 94.87 94.23 95.87 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '85.81 92.09 ', 'modified_lines': '', 'original_lines': '94.18 91.56 93.67 93.81 95.10 95.43 95.09 94.73 94.77 94.04 93.68 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.10 1.10 0.20 0.12 0.08 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1.03 0.30 ', 'modified_lines': '', 'original_lines': '0.11 0.72 0.27 0.18 0.23 0.25 0.21 0.13 0.17 0.29 0.65 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '± ± ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '± ± ± ', 'modified_lines': ' ', 'original_lines': '± ± ± ± ± ± ± ± ± ± ± ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '74.65 68.25 79.16 76.69 81.52 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '60.33 72.13 ', 'modified_lines': '', 'original_lines': '76.45 69.35 76.53 77.21 79.57 79.86 79.48 78.27 78.18 77.88 77.47 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 
'Abstract', 'after_section': None, 'context_after': '0.59 0.57 0.43 0.36 0.15 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.85 0.37 ', 'modified_lines': '', 'original_lines': '0.23 0.22 0.33 0.12 0.43 0.20 0.46 0.12 0.17 0.20 0.17 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Encoder ViT-S 5 CONCLUSION ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': 'consistently demonstrate that it is possible to leverage a simple linear transformation that is not only shared across all tokens but also across different datasets. Additional results can be found in Table 10. Transformation Ablation. Finally, we conducted an ablation study on the transformations used to approximate latent spaces. The results, presented in Table 4, show accuracy scores for ViT-S on ImageNet1k using the proposed method (RBA) alongside two more complex MultiLayer Perceptron (MLP) translators, referred to as Res-MLP and MLP. Details on these translators are provided in Appendix A.2.1. Both the MLP and Res-MLP translators are trained for 300 steps using a learning rate of 1e-3 and the Adam optimizer. The findings demonstrate that employing a simple linear transformation to approximate redundant layers is the optimal choice in most cases. As expected, the more blocks are approximated, the less linearly correlated they become, making a more complex approximation more effective (see 1 → 5 in Table 4). Furthermore, the Res-MLP and MLP translators require additional training, whereas the RBA approach is entirely training- and fine-tuning-free, as it relies on a closed-form linear transformation. This process eliminates the need for gradient computation or backpropagation. Table 4: Transformation Ablation. Classification accuracy scores when approximating using RBA or using a more complex MLP on ImageNet1k using ViT-S accross three seeds. The ”Approx” column bi → bi + n specifies the blocks used for approximation, where the first value represents the block whose output is used to approximate the second block’s output. Approx. RBA Accuracy ↑ MLP Res-MLP 1 2 7 5 → 5 10 → → 1 3 2 8 9 → → → → → 2 3 4 9 → → → → 3 5 4 10 11 3 4 5 10 43.68 60.41 33.77 65.31 68.16 67.81 46.75 46.17 71.74 71.70 71.49 61.11 ± ± ± ± ± ± ± ± ± ± ± ± 0.36 45.79 0.06 0.44 0.14 0.16 0.15 0.21 0.25 0.29 0.28 0.23 0.15 60.22 22.85 65.45 66.28 67.30 38.29 34.70 71.25 70.78 69.47 53.78 0.19 45.44 0.08 0.10 0.31 0.43 0.12 0.72 0.68 0.19 0.42 0.18 0.19 60.02 33.01 64.54 67.38 66.91 44.97 39.01 70.94 70.78 70.86 58.06 ± ± ± ± ± ± ± ± ± ± ± ± 0.12 0.34 0.76 0.25 0.14 0.09 0.60 0.34 0.18 0.10 0.10 0.43 ± ± ± ± ± ± ± ± ± ± ± ± Takeaway. Redundant Block Approximation preserves essential representational features while maintaining the model’s structural integrity, even when simplifying its architecture, whereas just skipping blocks could lead to performance degradation. ', 'original_lines': 'Table 3: Generalization Results. Classification accuracy scores when approximating using a transformation calculated on other datasets for ViT-S and DiNO-S using MNIST, CIFAR-10, CIFAR-100C and CIFAR-100F. CIFAR-100C refers to CIFAR-100 with the coarse setting (20 labels), while CIFAR-100F with the fine setting (100 labels). 
The ”Approx” column bi → bi + n specifies the blocks used for approximation, where the first value represents the block whose output is used to approximate the second block’s output. The ”Fit On” column indicates the dataset on which is calculated the linear transformation. Please refer to Table 9 for complete results. Approx. 2 → 3 3 → 4 4 → 5 1 → 3 3 → 5 DiNO-S 2 → 3 3 → 4 4 → 5 1 → 3 3 → 5 Fit On MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100 MNIST CIFAR-10 CIFAR-100C CIFAR-100F Accuracy ↑ 94.11 89.58 89.63 93.52 88.02 88.21 93.96 78.36 80.11 92.79 80.41 81.24 88.22 61.68 64.18 93.04 86.16 86.39 92.33 84.70 83.72 91.64 70.87 71.51 90.60 78.51 79.80 87.54 63.66 64.26 57.13 95.08 95.00 10.36 95.18 94.82 38.40 95.31 94.98 16.17 90.63 89.98 15.17 93.57 92.77 58.24 94.11 93.78 62.78 94.37 94.10 57.39 93.65 92.98 22.30 89.72 89.28 24.55 87.17 84.40 41.89 85.32 85.50 8.97 86.14 85.92 25.56 85.84 86.01 11.09 75.59 76.27 8.52 80.24 80.56 37.95 82.37 82.28 38.18 81.93 82.02 36.97 80.38 79.96 11.76 74.58 74.75 11.93 66.16 66.43 28.50 77.92 77.74 3.09 78.52 78.09 16.52 78.20 78.14 3.84 65.98 66.26 2.03 71.76 72.43 27.62 75.26 75.29 27.52 74.69 74.59 26.02 73.84 73.54 5.47 65.04 64.92 6.67 58.36 58.51 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Shashanka Venkataramanan, Amir Ghodrati, Yuki M Asano, Fatih Porikli, and Amir Habibian. Skip- attention: Improving vision transformers by paying less attention. In The Twelfth International ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Lucrezia Valeriani, Diego Doimo, Francesca Cuturello, Alessandro Laio, Alessio Ansuini, and Alberto Cazzaniga. The geometry of hidden representations of large transformer models. Advances in Neural Information Processing Systems, 36, 2024. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-26 18:35:13
ICLR.cc/2025/Conference
A88bkb9VsB
sZ5csC3VHX
[]
2025-03-07 21:06:20
ICLR.cc/2025/Conference
cXrlM3kSws
hYoDJzSqLs
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 1: Examples of 2D slices of 3D medical CT images (the first row), the ground truth masks of their pathological regions (the second row) and the anomaly maps predicted by fully self-supervised SCREENER for pathology segmentation (the third row). Note that, the second image from the left contains pneumothorax, missed by ground truth annotation mask, but detected by SCREENER. 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'Accurate and automated anomaly segmentation is critical for assisting clinicians in detecting and diagnosing pathological conditions, particularly in large-scale medical imaging datasets where manual annotation is not only time- and resource- intensive but also prone to inconsistency. To address these challenges, we propose SCREENER, a fully self-supervised framework for visual anomaly segmentation, leveraging self-supervised representation learning to eliminate the need for man- ual labels. Additionally, we model the conditional distribution of local image patterns given their global context, enabling the identification of anomalies as pat- terns with low conditional probabilities and assigning them high anomaly scores. SCREENER comprises three components: a descriptor model that encodes lo- cal image patterns into self-supervised representations invariant to local-content- preserving augmentations; a condition model that captures global contextual in- formation through invariance to image masking; and a density model that esti- mates the conditional density of descriptors given their global contexts to compute anomaly scores. We validate SCREENER by training a fully self-supervised model on over 30,000 3D CT images and evaluating its performance on four large-scale test datasets comprising 1,820 3D CT scans across four chest and abdominal pathologies. Our framework consistently outperforms existing unsupervised anomaly segmentation methods. Code and pre-trained models will be made publicly available. ', 'original_lines': 'In this paper we present a fully self-supervised framework for visual anomaly seg- mentation and apply it to pathology segmentation in 3D medical CT images. The core idea behind our framework is to learn conditional distribution of local image patterns given their global context. Thus, image patterns that have low conditional probability are assigned high anomaly scores. To this end, we propose SCREENER comprised of descriptor, condition, and density models. The descriptor model en- codes local image patterns into dense self-supervised representations. We enforce these descriptors to discriminate different image positions and remain invariant w.r.t. image augmentations that preserve local content. The condition model pro- duces auxiliary dense image representations, dubbed conditions. We ensure that conditions encode the global contexts of individual image positions, by enforc- image masking. The density model learns the ing them to be invariant w.r.t. conditional density of descriptors for each given condition and produces anomaly segmentation scores. We use this framework to train a fully self-supervised model for pathology segmentation on more than 30,000 3D CT images. Empirical study shows that SCREENER outperforms the existing unsupervised anomaly segmen- tation methods on four large-scale test CT datasets containing a total of 1820 3D images with four chest and abdominal pathologies. 
The code an the pre-trained models are available at link1 1Link will be available after the reviewing process. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 Under review as a conference paper at ICLR 2025 108 109 ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'INTRODUCTION ', 'modified_lines': 'The accurate and automated segmentation of pathologies in medical computed tomography (CT) im- ages is crucial for assisting clinicians in diagnosing and treating various conditions. However, devel- oping supervised models for pathology segmentation faces significant challenges: labeled datasets are scarce, annotations often cover only a limited range of findings, and manual labelling is not only resource-intensive but also inconsistent. For example, in Figure 1, pneumothorax is present in the second column (black region framed by red box) but is not included in the ground truth mask. Hence, supervised methods for pathology segmentation are often constrained in scope and applicability. In contrast, large-scale datasets of unlabelled CT images are readily available through public repos- itories (Team, 2011; Ji et al., 2022; Qu et al., 2024). These datasets remain largely underutilized due to the lack of annotations, despite their potential to enable fully unsupervised learning approaches. Leveraging this abundance of unlabelled data, we aim to develop a model capable of distinguishing pathological regions from normal ones without requiring labeled training data. Our core assump- tion is that pathological patterns are significantly rarer than healthy patterns in random CT images. This motivates framing pathology segmentation as an unsupervised visual anomaly segmentation (UVAS) problem, where anomalies correspond to pathological regions. While existing UVAS methods have been explored extensively for natural images, their adaptation to medical imaging remains challenging. A major hurdle is that most CT datasets contain unan- notated pathological regions, and there is no automatic way to filter these out to ensure a training set composed entirely of normal (healthy, non-pathological) images — a common requirement for synthetic-based (Zavrtanik et al., 2021; Marimont & Tarroni, 2023) and reconstruction-based (Baur et al., 2021; Schlegl et al., 2019) UVAS methods. Density-based approaches (Gudovskiy et al., 2022; Zhou et al., 2024), which assume anomalies are rare rather than entirely absent, are better suited for this setting, as they can handle training datasets with unannotated pathological regions. These methods model normal patterns probabilis- tically and assign higher anomaly scores to deviations. However, they rely on encoders pre-trained on ImageNet (Deng et al., 2009), optimized for natural images and not for the unique structures and textures in medical CT images. This domain shift leads to suboptimal feature representations failing to capture subtle pathological variations, reducing their effectiveness in medical settings. To address these challenges, we propose SCREENER, a framework that enhances density-based UVAS through domain-specific self-supervised learning and learned contextual conditioning. To avoid domain shift issues and labelling requirement, we pre-train self-supervised encoders (O Pin- heiro et al., 2020; Wang et al., 2021; Bardes et al., 2022; Goncharov et al., 2023) to produce dense CT-specific feature maps. 
We further introduce a second self-supervised encoder that gen- erates masking-invariant representations, capturing global context without being influenced by local anomalies. Finally, we train a conditional density model to predict the feature maps of one encoder based on the outputs of the other. Anomaly scores are assigned to image regions with high prediction errors, enabling effective segmentation of pathological regions. We demonstrate the effectiveness of SCREENER by training it on over 30,000 3D CT volumes span- ning chest and abdominal regions and evaluating its performance on four large-scale test datasets comprising 1,820 scans with diverse pathologies. As shown in Figure 1, our model successfully segments pathological regions across different organs and conditions. We summarize the key con- tributions of this work: • Self-Supervised Representations for UVAS: We demonstrate that dense self-supervised representations outperform supervised feature extractors in visual anomaly segmentation, enabling a fully self-supervised framework applicable in domains with limited labeled data. • Learned Conditioning Variables: We introduce self-supervised condition variables for density-based models, simplifying the estimation of conditional distributions and achieving remarkable segmentation performance using a simple Gaussian density model. • First Large-Scale Study of UVAS in 3D CT Images: This work presents the first large- scale evaluation of UVAS methods for 3D CT images, showing state-of-the-art performance on unsupervised semantic segmentation of pathologies in diverse anatomical regions, in- cluding lung cancer, pneumonia, liver and kidney tumors. ', 'original_lines': 'Medical computed tomography (CT) images allow radiologists to look inside the patient’s body and detect pathologies based on certain image patterns. Figure 1 shows 2D slices of 3D CT images (first row) and their pathological regions (second row). The last image is an example of a healthy slice. Labeled datasets of CT images are scarce and contain annotations of only few classes of find- ings, while many other pathologies remain unlabeled. That is why supervised models for pathology detection usually have limited functionality. Unlabeled CT images are much more available: there are large-scale public datasets (Team, 2011; Ji et al., 2022; Qu et al., 2024) that do not contain any labels or text annotations and usually re- main totally unused for training. This work is an attempt to use these datasets for training a fully- unsupervised semantic segmentation model that discriminates any pathological image regions from normal ones. Our core assumption is that individual pathological patterns are much more rare than individual healthy patterns in random CT images. Based on this assumption, we treat pathology semantic segmentation as unsupervised visual anomaly segmentation (UVAS) problem. The existing UVAS methods are well explored on natural images, in the setup when all training images are guaranteed to be normal. However, their applicability to unsupervised pathology seg- mentation in CT images remains unclear. One of the obstacles may be that the available training CT datasets contain a lot of images with unannotated pathologies and there is no way to automatically filter them out. This may negatively affect the quality of synthetic-based (Zavrtanik et al., 2021; Marimont & Tarroni, 2023) and reconstruction-based (Baur et al., 2021; Schlegl et al., 2019) UVAS methods. 
Density-based methods (Gudovskiy et al., 2022; Zhou et al., 2024) are more suitable for this scenario because they only assume that anomalies are rare in the training dataset. However, the existing density-based methods rely on image encoders pre-trained on ImageNet, and their quality may drop when applying them to medical images due to a large domain shift. In this work, we introduce a modification of the density-based UVAS framework. To obtain infor- mative representations, instead of using a supervised image encoder, we employ recent advances in self-supervised representation learning (Chen et al., 2020; Bardes et al., 2021) to pre-train domain- specific dense features that distinguish different CT image patterns and do not contain irrelevant low-level information, e.g. about image noise. For anomaly detection, we train a density model on top of the encoder representations that learns the distribution of these pre-trained dense features and later assigns high anomaly scores to image positions containing out-of-distribution features. More- over, we propose a novel self-supervised strategy of learning the auxiliary dense image features that can be used for conditioning in our density-based framework. Conditioning on these features drasti- cally simplifies the target conditional distribution and allows to learn it with a very simple gaussian model. We call our proposed framework SCREENER. We use SCREENER to train a fully self-supervised model for pathology segmentation on more than 30000 CT volumes covering both chest and abdomen anatomical regions. Surprisingly, despite huge variation in patterns across these anatomical regions, our model generalizes well to both. We show that our model is able to distinguish a wide range of pathologies in different organs from healthy image regions, as shown in the third row of Figure 1. We summarize our contributions below: • We show that dense self-supervised representations are favourable alternative to supervised feature extractors in density-based framework for visual anomaly segmentation. The pro- posed self-supervised framework is beneficial in the domains with scarce labeled data. • We further extend density-based UVAS framework by showing that instead of hand-crafted conditioning in density model, e.g. on positional encodings, one can learn data-driven condition variables in a self-supervised manner. To this end, we learn dense representations that are invariant to image masking, rendering them ignorant about local visual anomalies. Conditioning on these representations simplifies true conditional distribution and allows to achieve remarkable anomaly segmentation results with very simple gaussian model of conditional density. • Finally, this paper presents the first large-scale study of UVAS methods in 3D medical CT images. We show that the proposed density-based framework outperforms other UVAS methods on unsupervised semantic segmentation of a wide range of pathologies in different anatomical regions, including lung cancer, pneumonia, liver tumors and kidney tumors. 2 METHOD 2.1 OVERVIEW & INTUITION Our method assumes that a certain pathological pattern appears in CT images more rarely than any healthy pattern. To formalize this assumption we introduce two models, which we call descriptor model and density model. Descriptor model encodes image patterns and density model learns their distribution. 
For a given 3D image x ∈ RH×W ×S, descriptor model fθdesc (we use a fully-convolutional net) pro- duces feature maps y ∈ Rh×w×s×ddesc of individ- ual image positions P = {p | p ∈ [1, . . . , h] × [1, . . . , w] × [1, . . . , s]}. Descriptor model should be trained to produce informative descriptors that capture similarities and differences between different image patterns. We describe how we train it in Section 2.2. containing h · w · s descriptors {y[p]}p∈P ⊂ Rddesc Density model qθdens (y) estimates the true marginal density qY (y) of individual descriptors produced by the pre-trained descriptor model (random vector Y denotes a descriptor of a random position in a random image). In Section 2.4 we describe different parametrizations of qθdens (y). If a certain image contains some abnormal pattern at position p we expect that a proper descriptor model would produce a descriptor y[p] in a low density region and an accurate density model would yield a low value of qθdens (y[p]). Conversely, if an image is normal we expect high density model predictions {qθdens (y[p])}p∈P at every position of the image. Therefore, at the inference stage, we use negative log-density values {− log qθdens (y[p])}p∈P as anomaly segmentation scores. Our framework also involves an important generalization of the above approach which is based on the idea of conditioning. Instead of modeling the complex marginal distribution of all patterns that appear in CT images, we may learn their conditional distribution given a certain condition. For example, one can imagine a distribution of radiological patterns appearing at a certain anatomical region, or at a certain patients’ age. To implement this idea we introduce a third model, which we refer to as condition model. Condition model provides auxiliary information about the image or individual image positions. For- mally, similar to the descriptor model, condition model gθcond produces a map c ∈ Rh×w×s×dcond containing feature vectors {c[p]}p∈P ⊂ Rdcond of individual image positions, which we call condi- tions. We describe different options for condition model in Section 2.3. If condition model is given, density model qθdens (y | c) becomes conditional. It learns the true con- ditional density qY |C(y | c) (random vectors Y , C denote the descriptor and condition taken at a random position in a random image). In this conditional framework, we use negative log-density value − log qθdens (y[p] | c[p]) as anomaly score of position p at a particular image. Section 2.4 de- scribes conditional density models qθdens (y | c) used in our method. In conditional framework, density model qθdens (y | c) can also be viewed as a predictive model which tries to predict descriptors based on conditions. In this interpretation, negative log-density scores {− log qθdens (y[p] | c[p])}p∈P play the role of prediction errors. If descriptor y[p] lies in low con- ditional density region, it means that the actual pattern at position p differs from the patterns which we would expect at this position given the condition c[p]. 2.2 DESCRIPTOR MODELS Descriptor model plays a crucial role in our method. First, it must discriminate pathological patterns from normal ones, otherwise they do not get different anomaly scores in our framework. At the same time, descriptors should contain as little irrelevant information as possible. 
For example, if descriptors capture noise which is present in CT images, density model assigns high anomaly scores to healthy image regions containing extreme noise values, potentially causing false positive errors. Density-based UVAS methods for natural images Gudovskiy et al. (2022); Zhou et al. (2024) obtain dense image descriptors from hidden layers of a fully-convolutional neural network pre-trained on ImageNet in a supervised manner. However, in specific image domains, supervised representation learning may be suboptimal due to the scarcity and insufficient diversity of labeled data. On the contrary, self-supervised approach used in our UVAS framework lacks these limitations. 3 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 DESCRIPTOR MODEL', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 216 217 ', 'paragraph_idx': 20, 'before_section': None, 'context_before': '214 215 ', 'modified_lines': 'Figure 2: Illustration of SCREENER. First, we train a self-supervised descriptor model to produce informative feature maps invariant to image crops and color jitter. Second, we train a self-supervised condition model similarly but also enforce invariance to random block masking, ensuring its feature maps are insensitive to anomalies and reflect only contextually inferable information. Finally, the density model learns the conditional distribution pY |C(y | c) of feature vectors Y = y[p] and C = c[p] from the descriptor and condition models at a given position p. Anomaly score maps are then obtained by applying the density model pixel-wise, efficiently implemented with 1 × 1 × 1 convolutions. apply random augmentations, such as color jitter. The augmented crops, denoted as x(1) and x(2), are fed into the descriptor model, producing feature maps y(1) and y(2). From the overlapping region of the two crops, we randomly select n positions. For each position p, we compute its coordinates p(1) and p(2) relative to the augmented crops and extract descriptors y(1) = y(1)[p(1)] and y(2) = y(2)[p(2)]. These descriptors form a positive pair, as they correspond to the same position in the original image but are predicted from different augmentations. 4 ', 'original_lines': ' Below we describe the descriptor models’ training pipeline, illustrated in the upper part of Figure 2. From a random image x we select two random overlapping 3D crops of random size, resize them to H × W × S resolution and apply random color augmentations to them. We denote the obtained augmented crops as x(1) and x(2). After feeding each of them to the descriptor model we obtain two maps y(1), y(2) of their dense descriptors. Next, we select n random positions from the crops’ overlap region in the seed image. For each selected position p, we calculate its coordinates p(1) and p(2) w.r.t. the both augmented crops and obtain two descriptors y(1) = y(1)[p(1)] and y(2) = y(2)[p(2)]. We call descriptors (y(1), y(2)) a positive pair since they are predicted based on different augmented crops but correspond to the same position in the seed image. After repeating the described procedure for m seed images, we obtain a batch of N = n · m posi- tive pairs which we denote as {(y(1) i=1. The similar strategy of sampling a batch of dense positive pairs was used in Goncharov et al. (2023). Given the batch of positive pairs, the descriptor model training admits different SSL objectives optimization. 
In this work, we consider two promi- nent methods: contrastive learning SimCLR Chen et al. (2020) and VICReg Bardes et al. (2021). , y(2) i )}N i SimCLR In contrastive model, we feed the descriptors {(y(1) projector gθproj and l2-normalize them to obtain embeddings z(k) )/∥gθproj(y(k) where k = 1, 2 and i = 1, . . . N . Finally, we minimize the InfoNCE loss Chen et al. (2020): , y(2) i i = gθproj(y(k) i=1 to the trainable MLP- )∥ ∈ Rd, )}N i i i min θdesc,θproj N (cid:88) (cid:88) i=1 k∈{1,2} − log exp(⟨z(1) i exp(⟨z(1) i , z(2) i ⟩/τ ) + (cid:80) j̸=i , z(2) i (cid:80) ⟩/τ ) l∈{1,2} exp(⟨z(k) i . (1) , z(l) j ⟩/τ ) VICReg In VICReg model, we map the descriptors to high-dimensional embeddings via a train- able MLP-expander: z(k) ) ∈ RD, where k = 1, 2 and i = 1, . . . N . Then we compute two unbiased estimates of the mean vector and the covariance matrix of random descriptor’s embed- i=1(z(k) ding: z(k) = 1 i − z(k))⊤. At last, we minimize n the VICReg objective Bardes et al. (2021), comprised of invariance, variance and covariance terms: i = hθexpand (y(k) i − z(k))(z(k) , C (k) = 1 i=1 z(k) (cid:80)N (cid:80)N N −1 i i min θdesc,θproj α · Linv + β · Lvar + γ · Lcov. (2) i=1 ∥z(1) The first term Linv = 1 tations. The second term Lvar = (cid:80) N ·D (cid:80)N i − z(2) i ∥2 penalizes embeddings to be invariant to augmen- D (cid:80) i=1 enforces individual em- i,i + ε 0, 1 − C (k) max (cid:113) 1 D (cid:18) (cid:19) k∈{1,2} beddings’ dimensions to have unit variance. The third term Lcov = (cid:80) encourages different embedding’s dimensions to be uncorrelated, increasing the total information content of the embeddings. C (k) i,j k∈{1,2} (cid:80) (cid:17)2 1 D i̸=j (cid:16) 2.3 CONDITION MODELS In this work we compare three condition models: sin-cos positional encodings , anatomical posi- tional embeddings (APE) Goncharov et al. (2024) and our novel self-supervised condition model producing dense embeddings which are invariant w.r.t. image masking. Sin-cos positional encodings The existing density-based UVAS methods Gudovskiy et al. (2022); Zhou et al. (2024) for natural images use standard sin-cos positional encodings for conditioning. We also employ them as an option for condition model in our framework. However, let us clarify what we mean by sin-cos positional embeddings in CT images. Note that we never apply descriptor, condition or density models to the whole CT images due to memory constraints. Instead, at all the training stages and at the inference stage of our framework we always apply them to image crops of size H × W × S, as described in Sections 2.2, 2.4. When we say that we apply sin-cos positional embeddings condition model to an image crop, we mean that compute sin-cos encodings of absolute positions of its pixels w.r.t. to the whole CT image. 5 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 1: Summary information on the datasets that we use for training and testing of all models. Dataset ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '322 323 ', 'modified_lines': '', 'original_lines': 'Anatomical positional embeddings To implement the idea of learning the conditional distribu- tion of image patterns at each certain anatomical region, we need a condition model producing conditions c[p] that encode which anatomical region is present in the image at every position p. Supervised model for organs’ semantic segmentation would be an ideal condition model for this purpose. 
However, to our best knowledge, there is no supervised models that are able to segment all organs in CT images. That is why, we decided to try the self-supervised APE Goncharov et al. (2024) model which produces continuous embeddings of anatomical position of CT image pixels. Masking-invariant model Our last condition model implements the following idea. Suppose that a certain region of CT image is masked out and we try to guess the missing content based on the context. In most cases, we have no reason to expect that a pathology is hidden under the mask and would better bet than the masked region is healthy. Following this intuition, condition c[p] should play a role of a global context of the position p which contains a lot of information, but gives no reason to expect a pathology at this position. We propose to learn such conditions as dense self-supervised representations which are invariant w.r.t. to image masking. The training of such a condition model completely coincides with the VICReg descriptor model’s training, described in Section 2.2, with the only difference that we add image masking in the augmentations. See middle part of Figure 2) for illustration. 2.4 DENSITY MODELS As described in Section 2.1 we use two types of density models: marginal and conditional. When training a marginal density model qθdense (y) we sample a batch of m random crops {xi}m i=1 of size H × W × S from different CT images. We feed each crop to the pre-trained descriptor model to obtain their descriptor maps {yi}m i=1 of size h × w × s and optimize the negative log-likelihood loss: min θdens 1 m · |P | m (cid:88) (cid:88) i=1 p∈P − log qθdense (yi[p]). (3) When training a conditional density model qθdense(y | c), we also apply the condition model to obtain crops’ condition maps {ci}m i=1 and optimize the conditional negative log-likelihood loss: min θdens 1 m · |P | m (cid:88) (cid:88) i=1 p∈P − log qθdense (yi[p] | ci[p]). (4) At the inference stage, we split an input CT image into M patches {xi}M i=1 of size H × W × S (patches may overlap). To each patch we first apply the descriptor model. Then, in un- conditional framework we apply the trained marginal density model to obtain anomaly map {− log qθdense (yi[p])}p∈P of size h × w × s. In conditional framework, we apply the condition model and the conditional density model to obtain anomaly map {− log qθdense (yi[p] | ci[p])}p∈P . Then we upsample these patch-wise anomaly maps to the H × W × S size and aggregate them into a single anomaly map of the whole input CT image (we average the predictions in the patches’ overlap regions). Below we describe two parametrizations of marginal and conditional density models: gaussian as a simple baseline, and normalizing flow as an expressive generative model with allows tractable density estimation. Gaussian Gaussian marginal density model is written as − log qθdens (y) = 1 2 (y − µ)⊤Σ−1(y − µ) + 1 2 log det Σ + const, (5) where the trainable parameters θdens are mean vector µ and diagonal covariance matrix Σ. 
Conditional gaussian density model is written as − log qθdens (y | c) = 1 2 (y −µθdens (c))⊤ (Σθdens (c))−1 (y −µθdens (c))+ 1 2 log det Σθdens (c)+const, (6) 6 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.3 MAIN RESULTS', 'after_section': '4.3 MAIN RESULTS', 'context_after': 'Synthetic-based models yield many false negatives because during training they were penalized to predict zero scores in the unlabeled real pathological regions which may appear in training images. Meanwhile, MSFlow heavily relies on an ImageNet-pre-trained encoder which produces irrelevant features of 3D medical CT images. Our density-based model with domain-specific self-supervised features outperforms baselines by a large margin. 378 379 ', 'paragraph_idx': 46, 'before_section': '4.3 MAIN RESULTS', 'context_before': 'MSFlow (density-based method on top of ImageNet features). Quantitative comparison is presented in table 2. Qualitative comparison is shown in Figure 3. ', 'modified_lines': 'The analysis of the poor performance of the reconstruction-based methods is given in Appendix B. 4.4 CONDITION AND DENSITY MODELS’ ABLATION Table 3 demonstrates ablation study results. We test different options for condition and density models described in Sections ?? and 3.3, correspondingly. We use the VICReg descriptor with ddesc = 32 as it shows slightly better results than contrastive objective as reported in Section 4.5. All conditioning strategies yield results similar to the unconditional model when using expressive normalizing flow density model. However, in experiments with simple gaussian density models, we see that the results significantly improve as the condition model becomes more informative. Noticeably, our proposed masking-equivariant condition model allows gaussian model to compete with complex flow-based models and achieve very strong anomaly segmentation results. 7 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Under review as a conference paper at ICLR 2025 ', 'original_lines': 'The analysis of the poor performance of the reconstruction-based methods is given in Appendix A. Table 2: Quantitative comparison of our best model and the existing unsupervised visual anomaly segmentation methods on pathology segmentation in 3D medical CT images. Model AUROC AUROC up to FPR0.3 AUPRO up to FPR0.3 LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS Autoencoder f-AnoGAN DRAEM MOOD-Top1 MSFlow Screener (ours) 0.71 0.82 0.63 0.79 0.70 0.96 0.65 0.66 0.72 0.79 0.66 0.89 0.66 0.67 0.82 0.77 0.64 0.90 0.68 0.67 0.83 0.80 0.64 0.94 0.31 0.52 0.21 0.43 0.26 0.89 0.21 0.21 0.31 0.43 0.20 0.68 0.24 0.24 0.50 0.40 0.18 0.69 0.25 0.22 0.51 0.46 0.17 0.80 0.59 0.46 0.17 0.32 0.21 0.66 0.24 0.18 0.20 0.29 0.14 0.46 0.26 0.24 0.50 0.40 0.19 0.68 0.37 0.22 0.57 0.32 0.17 0.66 8 ', 'after_paragraph_idx': 46, 'before_paragraph_idx': 45}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 4: Ablation study of the effect of descriptor model. In these experiments we do not use condi- tioning and use normalizing flow as a marginal density model. We include MSFlow to demonstrate that descriptor model pre-trained on ImageNet is inappropriate for 3D medical CT images. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': '430 431 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 Table 3: Ablation study of the effect of conditional model for the fixed descriptor model (VICReg) and different conditional density models (gaussian and normalizing flow). None in Condtion model column means that results are given for a marginal density model. Descriptor model Condition model Density model AUROC AUROC up to FPR0.3 AUPRO up to FPR0.3 VICReg, ddesc = 32 None Sin-cos pos. APE VICReg, ddesc = 32 VICReg, ddesc = 32 Masking-equiv. VICReg, ddesc = 32 VICReg, ddesc = 32 VICReg, ddesc = 32 VICReg, ddesc = 32 Masking-equiv. None Sin-cos pos. APE LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS Gaussian Gaussian Gaussian Gaussian Norm. flow Norm. flow Norm. flow Norm. flow 0.81 0.82 0.88 0.96 0.96 0.96 0.96 0.96 0.81 0.80 0.80 0.84 0.89 0.89 0.88 0.87 0.61 0.74 0.78 0.87 0.88 0.90 0.89 0.90 0.71 0.77 0.86 0.90 0.93 0.94 0.94 0.93 0.41 0.45 0.67 0.90 0.89 0.89 0.87 0.88 0.47 0.42 0.46 0.58 0.68 0.68 0.65 0.64 0.12 0.26 0.34 0.58 0.62 0.69 0.67 0.68 0.22 0.34 0.56 0.71 0.78 0.80 0.80 0.80 0.46 0.40 0.43 0.64 0.67 0.66 0.64 0.65 0.62 0.50 0.38 0.41 0.46 0.46 0.43 0.40 0.13 0.27 0.35 0.57 0.62 0.68 0.66 0.67 0.28 0.32 0.55 0.48 0.65 0.66 0.66 0.63 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Reconstruction-based Trained solely on normal images, reconstruction-based approaches (Baur et al., 2021; Kingma & Welling, 2013; Schlegl et al., 2019), poorly reconstruct anomalous regions, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'training images (Marimont & Tarroni, 2023). A segmentation model is trained to predict binary masks of corrupted regions, providing well-calibrated anomaly scores for individual pixels. While straightforward to train, these models may overfit to synthetic anomalies and struggle with real ones. ', 'modified_lines': '', 'original_lines': ' 9 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Domain-specific and self-supervised SCREENER is no longer inhibited by limitations of the earlier methods and outperforms them by a large margin, which can be seen from empirical results obtained ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'component model, comprised of (i) self-supervised representation learning descriptor for image features, (ii) density-based anomaly detection model that learns distribution of the features, and (iii) conditioning model containing auxiliary information which boosts simpler density models. 
', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '645 646 647 ', 'modified_lines': '', 'original_lines': ' Denis Gudovskiy, Shun Ishizaka, and Kazuki Kozuka. Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 98–107, 2022. Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara, Edward Walczak, Keenan Moore, Heather Kaluzniak, Joel Rosenberg, Paul Blake, Zachary Rengel, Makinna Oestreich, Joshua Dean, Michael Tradewell, Aneri Shah, Resha Tejpaul, Zachary Edgerton, Matthew Peterson, Shaneabbas Raza, Subodh Regmi, Nikolaos Papanikolopoulos, and Christopher Weight. The kits19 challenge data: 300 kidney tumor cases with clinical context, ct semantic segmentations, and surgical outcomes, 2020. URL https://arxiv.org/abs/1904.00445. Yuanfeng Ji, Haotian Bai, Chongjian Ge, Jie Yang, Ye Zhu, Ruimao Zhang, Zhen Li, Lingyan Zhanng, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. Advances in neural information processing systems, 35:36722–36732, 2022. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. Advances in neural information processing systems, 31, 2018. Polina Kirichenko, Pavel Izmailov, and Andrew G Wilson. Why normalizing flows fail to detect out-of-distribution data. Advances in neural information processing systems, 33:20578–20589, 2020. Sergio Naval Marimont and Giacomo Tarroni. Achieving state-of-the-art performance in the medical out-of-distribution (mood) challenge using plausible synthetic anomalies, 2023. Walter HL Pinaya, Mark S Graham, Robert Gray, Pedro F Da Costa, Petru-Daniel Tudosiu, Paul Wright, Yee H Mah, Andrew D MacKinnon, James T Teo, Rolf Jager, et al. Fast unsupervised brain anomaly detection and segmentation with diffusion models. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 705–714. Springer, 2022. Chongyu Qu, Tiezheng Zhang, Hualin Qiao, Yucheng Tang, Alan L Yuille, Zongwei Zhou, et al. Abdomenatlas-8k: Annotating 8,000 ct volumes for multi-organ segmentation in three weeks. Advances in Neural Information Processing Systems, 36, 2024. Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Sch¨olkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14318–14328, 2022. Marco Rudolph, Bastian Wandt, and Bodo Rosenhahn. Same same but differnet: Semi-supervised defect detection with normalizing flows. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 1907–1916, 2021. Thomas Schlegl, Philipp Seeb¨ock, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt- Erfurth. 
f-anogan: Fast unsupervised anomaly detection with generative adversarial networks. Medical image analysis, 54:30–44, 2019. Joan Serr`a, David ´Alvarez, Vicenc¸ G´omez, Olga Slizovskaia, Jos´e F N´u˜nez, and Jordi Luque. In- put complexity and out-of-distribution detection with likelihood-based generative models. arXiv preprint arXiv:1909.11480, 2019. Nina Shvetsova, Bart Bakker, Irina Fedulova, Heinrich Schulz, and Dmitry V Dylov. Anomaly de- tection in medical imaging with deep perceptual autoencoders. IEEE Access, 9:118571–118583, 2021. National Lung Screening Trial Research Team. The national lung screening trial: overview and study design. Radiology, 258(1):243–253, 2011. 12 Under review as a conference paper at ICLR 2025 Emily Tsai, Scott Simpson, Matthew P. Lungren, Michelle Hershman, Leonid Roshkovan, Errol Colak, Bradley J. Erickson, George Shih, Anouk Stein, Jayashree Kalpathy-Cramer, Jody Shen, Mona A.F. Hafez, Susan John, Prabhakar Rajiah, Brian P. Pogatchnik, John Thomas Mongan, Emre Altinmakas, Erik Ranschaert, Felipe Campos Kitamura, Laurens Topff, Linda Moy, Jef- frey P. Kanne, and Carol C. Wu. Medical imaging data resource center - rsna international covid radiology database release 1a - chest ct covid+ (midrc-ricord-1a), 2020. Shuyuan Wang, Huiyuan Luo, Qi Li, Chengkan Lv, and Zhengtao Zhang. Pouta-produce once, utilize twice for anomaly detection. Minghui Yang, Peng Wu, and Hui Feng. Memseg: A semi-supervised method for image surface defect detection using differences and commonalities. Engineering Applications of Artificial In- telligence, 119:105835, 2023. Jiawei Yu, Ye Zheng, Xiang Wang, Wei Li, Yushuang Wu, Rui Zhao, and Liwei Wu. Fast- flow: Unsupervised anomaly detection and localization via 2d normalizing flows. arXiv preprint arXiv:2111.07677, 2021. Vitjan Zavrtanik, Matej Kristan, and Danijel Skoˇcaj. Draem-a discriminatively trained reconstruc- tion embedding for surface anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8330–8339, 2021. Hui Zhang, Zheng Wang, Zuxuan Wu, and Yu-Gang Jiang. Diffusionad: Denoising diffusion for anomaly detection. arXiv preprint arXiv:2303.08730, 2023. Yixuan Zhou, Xing Xu, Jingkuan Song, Fumin Shen, and Heng Tao Shen. Msflow: Multiscale flow- based framework for unsupervised anomaly detection. IEEE Transactions on Neural Networks and Learning Systems, 2024. David Zimmerer, Jens Petersen, Gregor K¨ohler, Paul J¨ager, Peter Full, Tobias Roß, Tim Adler, Annika Reinke, Lena Maier-Hein, and Klaus Maier-Hein. Medical out-of-distribution analysis challenge 2022. Publisher: Zenodo, 2021. 13 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 11:58:49
ICLR.cc/2025/Conference
hYoDJzSqLs
ulVqKlzPUH
[{'section': '2 BACKGROUND & NOTATION', 'after_section': '2 BACKGROUND & NOTATION', 'context_after': 'This framework can be extended using a conditioning mechanism. For each position p, an auxiliary variable c[p], referred to as a condition, is introduced. Let C denote the condition at a random posi- ', 'paragraph_idx': 12, 'before_section': '2 BACKGROUND & NOTATION', 'context_before': 'pattern at position p, the descriptor y[p] is expected to lie in a low-density region, yielding a low qθdens (y[p]). Conversely, normal patterns produce high densities. During inference, the negative log- density values, − log qθdens (y[p]) are used as anomaly segmentation scores. Density models we use ', 'modified_lines': 'in SCREENER are simple Gaussian model and more expressive normalizing flow (see Appendix E). ', 'original_lines': 'in SCREENER are simple Gaussian model and more expressive normalizing flow (see Appendix D. ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 12}, {'section': '2 BACKGROUND & NOTATION', 'after_section': '2 BACKGROUND & NOTATION', 'context_after': 'Self-supervised learning leverages unlabelled data to learn representations invariant to transforma- tions through auxiliary tasks. SSL objectives align embeddings of augmented views x(1), x(2) of the ', 'paragraph_idx': 13, 'before_section': '2 BACKGROUND & NOTATION', 'context_before': 'density qY |C(y|c) is learned for each condition c. During inference, the negative log-conditional densities, − log qθdens (y[p] | c[p]), are used as anomaly scores. State-of-the-art methods (Gudovskiy et al., 2022; Zhou et al., 2024) adopt this conditional framework and use sinusoidal positional en- ', 'modified_lines': 'codings as conditions. See detailed descriptions for positional condition alternatives in Appendix D. ', 'original_lines': 'codings as conditions. ', 'after_paragraph_idx': 14, 'before_paragraph_idx': 13}, {'section': '3.1 DESCRIPTOR MODEL', 'after_section': '3.1 DESCRIPTOR MODEL', 'context_after': 'The descriptor model training pipeline is illustrated in the upper part of Figure 2. From a random image x, we extract two overlapping 3D crops of random size, resize them to H × W × S, and 3 Under review as a conference paper at ICLR 2025 162 163 ', 'paragraph_idx': 17, 'before_section': '3.1 DESCRIPTOR MODEL', 'context_before': 'To pre-train dense descriptors, we use dense joint embedding SSL methods (Section 2 and Ap- pendix C), which allow explicit control over the information content of the representations. Specifi- cally, we penalize descriptors for failing to distinguish between different positions within or across ', 'modified_lines': 'images, ensuring they capture spatially discriminative features. Simultaneously, we enforce invari- ance to low-level perturbations, such as cropping and color jitter, to eliminate irrelevant information. apply random augmentations, such as color jitter. The augmented crops, denoted as x(1) and x(2), are fed into the descriptor model, producing feature maps y(1) and y(2). Figure 2: Illustration of SCREENER. First, we train a self-supervised descriptor model to produce in- formative feature maps invariant to image crops and color jitter. Second, we train a self-supervised condition model similarly but also enforce invariance to random block masking, ensuring its fea- ture maps are insensitive to anomalies and reflect only contextually inferable information. 
Finally, the density model learns the conditional distribution pY |C(y | c) of feature vectors Y = y[p] and C = c[p] from the descriptor and condition models at a given position p. Anomaly score maps are obtained by applying the density model pixel-wise, efficiently implemented by 1 × 1 × 1 convolu- tions. From the overlapping region of the two crops, we randomly select n positions. For each position p, we compute its coordinates p(1) and p(2) relative to the augmented crops and extract descriptors y(1) = y(1)[p(1)] and y(2) = y(2)[p(2)]. These descriptors form a positive pair, as they correspond to the same position in the original image but are predicted from different augmentations. Repeating this process for m seed images yields a batch of N = n · m positive pairs, denoted as {(y(1) i=1. This strategy for sampling dense positive pairs follows the approach in (Gon- i , y(2) i )}N 4 ', 'original_lines': 'images, ensuring they capture spatially discriminative features. Simultaneously, we enforce in- variance to low-level perturbations, such as random crops and color jitter, to eliminate irrelevant information. ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 17}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '214 215 ', 'modified_lines': '', 'original_lines': 'Figure 2: Illustration of SCREENER. First, we train a self-supervised descriptor model to produce informative feature maps invariant to image crops and color jitter. Second, we train a self-supervised condition model similarly but also enforce invariance to random block masking, ensuring its feature maps are insensitive to anomalies and reflect only contextually inferable information. Finally, the density model learns the conditional distribution pY |C(y | c) of feature vectors Y = y[p] and C = c[p] from the descriptor and condition models at a given position p. Anomaly score maps are then obtained by applying the density model pixel-wise, efficiently implemented with 1 × 1 × 1 convolutions. apply random augmentations, such as color jitter. The augmented crops, denoted as x(1) and x(2), are fed into the descriptor model, producing feature maps y(1) and y(2). From the overlapping region of the two crops, we randomly select n positions. For each position p, we compute its coordinates p(1) and p(2) relative to the augmented crops and extract descriptors y(1) = y(1)[p(1)] and y(2) = y(2)[p(2)]. These descriptors form a positive pair, as they correspond to the same position in the original image but are predicted from different augmentations. 4 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'charov et al., 2023). Using this batch, we optimize the descriptor model with SSL objectives. In this work, we employ two prominent objectives: InfoNCE (Chen et al., 2020) and VICReg (Bardes et al., 2021), detailed in Appendix C. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '268 269 ', 'modified_lines': '', 'original_lines': ')}N , y(2) i Repeating this process for m seed images yields a batch of N = n · m positive pairs, denoted as {(y(1) i=1. 
This strategy for sampling dense positive pairs follows the approach in (Gon- i ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'To achieve these properties, we learn conditions c[p] through a self-supervised condition model gθcond , which has a fully-convolutional architecture similar to the descriptor model. The model gen- erates feature maps c ∈ Rh×w×s×ddesc The learned conditions c[p] are designed to ignore the presence of pathologies, as such information cannot be consistently inferred from masked views. Instead, the condition model likely encodes patient-level attributes (e.g., age, gender) and position-specific attributes (e.g., anatomical region, tissue type) that are predictable from masked contexts. Conditioning on these variables simplifies density estimation, as conditional distributions are often less complex than marginal distributions. Moreover, conditioning can improve fairness: for instance, if certain anatomical regions or demo- graphic groups are underrepresented in the training data, an unconditional density model might treat these as anomalies. In contrast, a model conditioned on gender or anatomical region would handle such cases more appropriately by treating them within their specific context. 3.3 DENSITY MODELS min 1 m · |P | ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'from this expectation –indicating low conditional probability– it is classified as an anomaly. Building on this intuition, we propose that the condition c[p] in the conditional density-based UVAS ', 'modified_lines': 'framework should capture the global context of the image position p. Global implies that c[p] must be inferable from various masked views of the image. At the same time, conditions may vary across different regions of the image to encode position-specific information, such as anatomical location or tissue type. that are invariant to image masking, providing a condition for each position in the input image. The training process mirrors that of the VICReg descriptor model (Section 3.1), with the addition of masking as part of the augmentations. An illustration of this approach is shown in the middle part of Figure 2. To train a conditional density model, qθdense (y | c), we sample a batch of m random crops, {xi}m i=1, each of size H × W × S, from different CT images. Each crop is passed through the pre-trained descriptor and condition models to produce descriptor maps, {yi}m i=1, both of size h × w × s. We then optimize the conditional negative log-likelihood loss: i=1, and condition maps, {ci}m θdense ', 'original_lines': 'framework should capture the global context of the image position p. Global implies that c[p] must be inferable from various masked views of the image, ensuring robustness. At the same time, conditions can vary across different regions of the image to encode position-specific information, such as anatomical location or tissue type. , providing a condition for each position in the input image. To achieve this, we learn conditions as dense self-supervised representations that are invariant to image masking. The training process mirrors that of the VICReg descriptor model (Section 3.1), with the addition of masking as part of the augmentations. An illustration of this approach is shown in the middle part of Figure 2. When training a conditional density model qθdense (y | c), we sample a batch of m random crops {xi}m i=1 of size H ×W ×S from different CT images. 
We feed each crop to the pre-trained descriptor and condition models to obtain their descriptor maps {yi}m i=1 of size h × w × s. Then we optimize the conditional negative log-likelihood loss: i=1 and condition maps {ci}m θdens ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 DENSITY MODELS', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 29, 'before_section': '3.3 DENSITY MODELS', 'context_before': '− log qθdense (yi[p] | ci[p]). ', 'modified_lines': 'At inference, an input CT image is divided into M overlapping patches, {xi}M i=1, each of size H × W × S. For each patch, we apply the descriptor, condition, and conditional density models to compute the anomaly map, {− log qθdense(yi[p] | ci[p])}p∈P . These patch-wise anomaly maps are upsampled to H × W × S and aggregated into a single anomaly map for the entire CT image by averaging predictions in overlapping regions. We explore two parameterizations for the marginal and conditional density models: Gaussian dis- tributions as a straightforward baseline and normalizing flows as an expressive generative model enabling tractable density estimation. For further details, please refer to Appendix E. ', 'original_lines': '(1) At the inference stage, we split an input CT image into M patches {xi}M i=1 of size H × W × S (patches may overlap). To each patch we first apply the descriptor model, the condition model and the conditional density model to obtain anomaly map {− log qθdense (yi[p] | ci[p])}p∈P . Then we upsample these patch-wise anomaly maps to the H × W × S size and aggregate them into a single anomaly map of the whole input CT image (we average the predictions in the patches’ overlap regions). We use two parametrizations of marginal and conditional density models: gaussian as a simple baseline, and normalizing flow as an expressive generative model with allows tractable density esti- mation. Please, refer to Appendix D for details. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 29}, {'section': '4.2 EVALUATION METRICS', 'after_section': '4.2 EVALUATION METRICS', 'context_after': 'testing CT datasets contain annotations of only specific types of tumors, while other pathologies may be present in the images but not included in the ground truth masks. It makes impossible to fairly estimate metrics like Dice score or Hausdorff distance, which count our model’s true positive ', 'paragraph_idx': 37, 'before_section': '4.2 EVALUATION METRICS', 'context_before': 'employed in MVTecAD benchmark (Bergmann et al., 2021): pixel-level AUROC and AUPRO cal- culated up to 0.3 FPR. We also compute area under the whole pixel-level ROC-curve. Despite, our model can be viewed as semantic segmentation model, we do not report standard segmentation ', 'modified_lines': 'metrics, e.g. Dice score , due to the following reasons. As we mention in Section 4.1, available ', 'original_lines': 'metrics, e.g. Dice score , due to the following reasons. As we mention in Sections ??, available ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 37}, {'section': '4.4 CONDITION AND DENSITY MODELS’ ABLATION', 'after_section': '4.4 CONDITION AND DENSITY MODELS’ ABLATION', 'context_after': 'ddesc = 32 as it shows slightly better results than contrastive objective as reported in Section 4.5. All conditioning strategies yield results similar to the unconditional model when using expressive normalizing flow density model. 
However, in experiments with simple gaussian density models, we see that the results significantly improve as the condition model becomes more informative. 7 ', 'paragraph_idx': 50, 'before_section': None, 'context_before': '4.4 CONDITION AND DENSITY MODELS’ ABLATION ', 'modified_lines': 'Table 3 demonstrates ablation study of our proposed condition model. We compare our condition model with two baselines: van¨ıla sin-cos positional encodings and anatomical positional embed- dings (Goncharov et al., 2024), described in Appendix D. We evaluate condition models in combi- nation with different density models, described in Section 3.3. We use the VICReg descriptor with ', 'original_lines': 'Table 3 demonstrates ablation study results. We test different options for condition and density models described in Sections ?? and 3.3, correspondingly. We use the VICReg descriptor with Noticeably, our proposed masking-equivariant condition model allows gaussian model to compete with complex flow-based models and achieve very strong anomaly segmentation results. ', 'after_paragraph_idx': 50, 'before_paragraph_idx': None}, {'section': '4.4 CONDITION AND DENSITY MODELS’ ABLATION', 'after_section': None, 'context_after': '4.5 DESCRIPTOR MODELS’ ABLATION ', 'paragraph_idx': 52, 'before_section': None, 'context_before': '0.65 LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS LIDC MIDRC KiTS LiTS ', 'modified_lines': ' Noticeably, our proposed masking-invariant condition model allows Gaussian model to compete with complex flow-based models and achieve very strong anomaly segmentation results. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'processing, as seen in DifferNet (Rudolph et al., 2021) and CFlow-AD (Gudovskiy et al., 2022), enhances defect detection by handling variations in defect size. However, CFlow-AD’s independent estimation of each feature vector lacks contextual awareness, resulting in fragmented and inaccurate ', 'modified_lines': '', 'original_lines': 'localization. MSFlow (Zhou et al., 2024) addresses this limitation by concurrently estimating fea- tures at all positions, incorporating contextual information through 3x3 convolutions and employing a fusion flow block for information exchange across scales. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 VISUAL UNSUPERVISED ANOMALY LOCALIZATION', 'after_section': None, 'context_after': 'Our method is related to FastFlow (Yu et al., 2021), CFlow (Gudovskiy et al., 2022) and MS- Flow (Zhou et al., 2024) methods for anomaly segmentation. Besides some technical differences ', 'paragraph_idx': 56, 'before_section': None, 'context_before': '483 484 485 ', 'modified_lines': ' localization. MSFlow (Zhou et al., 2024) addresses this limitation by concurrently estimating fea- tures at all positions, incorporating contextual information through 3x3 convolutions and employing a fusion flow block for information exchange across scales. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '10 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arXiv:1312.6114, 2013. ', 'modified_lines': '', 'original_lines': 'Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. 
Advances in neural information processing systems, 31, 2018. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 DESCRIPTOR MODEL', 'after_section': None, 'context_after': ', z(l) ', 'paragraph_idx': 18, 'before_section': '3.1 DESCRIPTOR MODEL', 'context_before': ', ', 'modified_lines': '(1) ', 'original_lines': '(2) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}, {'section': '2 BACKGROUND & NOTATION', 'after_section': None, 'context_after': 'i=1 ∥z(1) The first term Linv = 1 ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'α · Linv + β · Lvar + γ · Lcov, ', 'modified_lines': '(2) ', 'original_lines': '(3) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 12:16:31
ICLR.cc/2025/Conference
dgm9eAnxPm
bC6g8dERxz
[{'section': '1 (t) −', 'after_section': '1 (t) −', 'context_after': '1 ', 'paragraph_idx': 40, 'before_section': '1 (t) −', 'context_before': 'See also Appendix E.3 for the practical version of our law that accounts for the warmup phase by slightly changing the A · S−α ', 'modified_lines': '(t) term. See Appendix A.1 for the discussion about the simplification of the multi-power law. ', 'original_lines': ' (t) term. ', 'after_paragraph_idx': 40, 'before_paragraph_idx': 40}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.283.30Learning RateLossMulti-powerOne-power0.00.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.28Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'monotonic schedules, although derived for monotonic decay schedules. The test set includes chal- lenging cases such as cyclic schedules and the random-polyline schedule, where LR values are randomly selected at every 8,000 steps and connected by a polyline. These experiments, conducted ', 'modified_lines': '', 'original_lines': 'on a 25M-parameter model over 72,000 steps, also represent a demanding long-horizon scenario. As 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 1: Model performance comparison. R2, MAE, RMSE, PredE, and WorstE are the coefficient of determination, Mean Absolute Error, Root Mean Square Error, Prediction Error, and Worst-case Error, respectively. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'as Loss Ends(C). The MPL uses 24000-step constant and cosine curves, marked as Loss Curves(M). Right: Comparison of MPL and CDSL fits on the open-source 7B OLMo curve generated with a linear schedule. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL VALIDATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': '4 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES Due to the high cost of each pretraining run and the curse of dimensionality for LR schedules, it is generally very impossible to tune the LR for every training step. However, in this section, we show that by using the predicted final loss from the MPL, we can optimize the entire LR schedule to significantly reduce the final loss and beat the cosine schedule. 4.1 METHOD ', 'paragraph_idx': 46, 'before_section': '3 EMPIRICAL VALIDATION OF THE MULTI-POWER LAW', 'context_before': 'in Figure 20, MPL outperforms MTL in predicting loss reduction for WSD schedules with linear LR decay. In the highlighted regions, MPL achieves high accuracy in the decay stage, whereas MTL exhibits substantial error. A summary of prediction results across test sets is provided in Table 1, ', 'modified_lines': 'where MPL consistently outperforms MTL in both average and worst-case scenarios. The details of the MTL and its relation to the MPL can be found in Appendix A.1. 
8 020000400006000080000100000120000Steps2.62.72.82.93.03.13.2Loss1240001260001280001300002.5502.575Loss CurvesChinchilla PredMulti-power PredLoss Ends(C)Loss Curves(M)Target Loss0100000200000300000400000500000Step2.02.12.22.32.42.5LosslossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Under review as a conference paper at ICLR 2025 ', 'original_lines': 'where MPL consistently outperforms MTL in both average and worst-case scenarios. 8 020000400006000080000100000120000Steps2.62.72.82.93.03.13.2Loss1240001260001280001300002.5502.575Loss CurvesChinchilla PredMulti-power PredLoss Ends(C)Loss Curves(M)Target Loss0100000200000300000400000500000Step2.02.12.22.32.42.5LosslossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 46}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Induced LR Schedule Outperforms Cosine Schedule. Figures 1 and 17 compare the induced schedules with the cosine and WSD schedules across models ranging from 25M to 400M. Figure 18 extends this comparison to longer training horizons. The induced schedules consistently outperform the cosine schedule, achieving a margin over 0.02, which is significant given the limited training scale. Notably, no WSD-like schedule is present in the training set, making the prediction of such loss curves an extrapolation by MPL. Characteristics of the Induced Schedules. The induced schedules provide insights into hyperpa- rameter tuning for WSD schedules. Observations from Figures 1 and 17 highlight the following: 1. Lower Ending LR Compared to Common Practice. Prior research (Hoffmann et al., 2022; Kaplan et al., 2020; H¨agele et al., 2024) recommends an ending LR of 1/10 of the peak LR. However, our findings suggest that a lower ending LR—typically below 1/20 of the peak LR—is ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'lows a Warmup-Stable-Decay (WSD) pattern, comprising two main stages after the warmup phase. It maintains a peak LR for an extended period, followed by a rapid decay to a near-zero LR, as shown in Figure 1 and Figure 17. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3. Alignment with Optimal Decay Ratios. The induced schedules align closely with the optimal 5 DISCUSSION ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '2. Final Decay Resembles Power Decay. The relationship between normalized steps ˜t and nor- malized LRs ˜ηavg is well captured by f (x) = (1 − x)−α, where α ≈ 1.5 in our experiments. This simplified version, referred to as WSD with Sqrt-Cube Decay (WSDSC), is effective across ', 'modified_lines': 'various model sizes and types, as shown in Figures 8 and 21. While WSDSC does not match the optimized schedule, it simplifies application and informs schedule design. See Section A.2. decay steps identified via grid search, as illustrated in Figure 1. See Section H for details. ', 'original_lines': 'various model sizes and types, as shown in Figures 8 and 21. While WSDSC does not match the performance of the optimized schedule, it simplifies application and informs schedule design. Details are in Section A.2. decay steps identified via grid search, as illustrated in Figure 1. More details are provided in Section H. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Model Types. We validate the MPL on GPT-2 (Radford et al., 2019) and OLMo (Groeneveld 9 ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'multi-power law (MPL). The hyperparameters include the model types, model sizes, peak learning rates, and random seeds. In addition to empirical results, we can theoretically derive a multi-power law under a case with a quadratic loss function, providing insight into the nature of the MPL. ', 'modified_lines': 'et al., 2024) models to evaluate the generalizability of the MPL across model architectures. In the preceding experiments, we used the Llama2 (Touvron et al., 2023). For experiments on GPT-2, the validation process followed the procedure fit with curves of cosine and constant schedules, described ', 'original_lines': ' et al., 2024) models. In the preceding experiments, we used the Llama2 architecture (Touvron et al., 2023). To evaluate the generalizability of the MPL across model architectures, we conducted ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': 'Model Size. We extended the MPL and its induced schedule to a larger scale by training a 1B- parameter model on 144B data tokens. The MPL was fitted over 24,000 steps and successfully predicted loss curves up to 72,000 steps, as shown in Figure 8. We tested the performance of the ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '43.56 44.09 (↑ 0.53%) ', 'modified_lines': 'in Section 3. For the 7B OLMo model, we fit the MPL on the open-source training curve, which employs a linear decay schedule, as shown on the right of Figure 7. Our results show that the MPL presents a high prediction accuracy across different model types for both self-run and open-source experiments. Details see Appendix I. ', 'original_lines': 'the following experiments. First, we applied the MPL to GPT-2 models. The validation process followed the procedure described in Appendix A.3. Second, we fitted the MPL to the open-source training curve of the 7B OLMo model, which employed a linear decay schedule, as shown on the right of Figure 7. Our results demonstrate that MPL provides a superior fit for the OLMo curve compared to the Chinchilla Law, which fails to adequately capture the curve without incorporating learning rate dependencies. Details see Appendix I. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 DISCUSSION', 'after_section': '5 DISCUSSION', 'context_after': 'Peak Learning Rate Ablation. We evaluated the applicability of the MPL across different peak learning rates. In previous experiments, the peak learning rate was fixed at 3 × 10−4. However, as shown in Figure 4, the empirical behavior of two-stage learning rate schedules deviates when the peak learning rate increases. To investigate this, we conducted experiments with peak learning rates Random Seed. We performed an ablation study to examine the impact of random seed variability Theoretical Results. We also present a theoretical analysis for quadratic loss functions optimizing Gradient Descent (GD) with noise. We can prove that the multi-power law arises when the Hessian and noise covariance matrices follow a power-law distribution in their eigenvalues. 
See Appendix K ', 'paragraph_idx': 49, 'before_section': '5 DISCUSSION', 'context_before': 'the induced schedules, we compared downstream task performance for models trained using the co- sine and induced schedules. As shown in Table 2, the induced schedule led to overall improvements in downstream tasks. Details see Appendix I. ', 'modified_lines': 'of 4 × 10−4 and 6 × 10−4. The MPL achieved an average R2 value of 0.9965 for the 4 × 10−4 case and 0.9940 for the 6 × 10−4 case, demonstrating consistently high accuracy. Details see Appendix I. Batch Size Ablation. We conduct ablation experiments on sequence batch sizes of 64 and 256 over 25M models, apart from 128 in previous experiments. The MPL presents a consistent accuracy with R2 higher than 0.9970. See Appendix I. on curves. We trained a 25M-parameter model for 24,000 steps using the cosine schedules with three random seeds. As shown in Figure 20, the standard deviation of the resulting loss values was approximately less than 0.001, establishing a lower bound for prediction errors. It highlights the prediction accuracy of the MPL discussed in Section 3. ', 'original_lines': ' of 4 × 10−4 and 6 × 10−4. The MPL achieved an average R2 value of 0.995 for the 4 × 10−4 case and 0.991 for the 6 × 10−4 case, demonstrating consistently high accuracy. Details see Appendix I. on training curves. Specifically, we trained a 25M-parameter model for 24,000 steps using the cosine learning rate schedules with three different random seeds. As shown in Figure 20, the standard deviation of the resulting loss values was approximately less than 0.001. This establishes a lower bound for prediction errors and highlights the prediction accuracy of the MPL discussed in Section 3. ', 'after_paragraph_idx': 49, 'before_paragraph_idx': 49}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 810 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '808 809 ', 'modified_lines': ' ', 'original_lines': 'Figure 10: Approximation of decay functions. A.3 THE SELECTION OF TRAINING SET We conduct ablation experiments over the loss curves in the training sets, including two-stage, co- sine, and constant LR schedules. We remove one of them and keep the other two as training sets. Then we fit over these subsets of the full training sets in the same approach with the multi-power law. The runs are over 25M models. The resulting coefficients are shown in Table 3 and the resulting test metrics are shown in Table 4. The test metrics are measured over the full test sets, including different schedule types and horizons. There are some observations as follows: • As shown in Table 3, the coefficients of different fittings are consistent overall, while there are a few parameters that vary, like C. We conjecture it indicates the redundancy in the current formula form. A refined form of multi-power law would be expected in future work. • As shown in Table 4, the multi-power law shows its robustness over the training sets, with comparable performances between the full-set fitting and the subset-fitting results. Items A B C α β γ Full Cosine + 2-stage Constant + 2-stage Constant + Cosine 0.507 0.5272 0.5279 0.5292 446.4 455.0 457.0 477.4 2.070 6.276 7.569 0.854 0.531 0.5032 0.5067 0.5041 0.406 0.3622 0.3613 0.3189 0.522 0.4172 0.4002 0.6256 L0 3.1 3.147 3.149 3.146 Table 3: Parameters for Different Fittings. “Full” denotes the fitting with the full training set, all three loss curves. 
Model R2 ↑ MAE ↓ RMSE ↓ PredE ↓ WorstE ↓ Full Cosine + 2-stage Constant + 2-stage Constant + Cosine 0.9975 0.9971 0.9976 0.9993 0.0039 0.0040 0.0037 0.0020 0.0046 0.0046 0.0045 0.0031 0.0012 0.0012 0.0011 0.0006 0.0040 0.0048 0.0039 0.0060 Table 4: Performance Metrics for Different Fittings. B SANITY CHECK ON DERIVATION AND OPTIMIZATION Sanity Check on Two(Multi)-Stage LR Schedule. We provide an empirical sanity check of our multi-power law in the case of the two-stage and multi-stage LR schedules. 15 0.00.20.40.60.81.0Normalized Steps0.00.20.40.60.81.0Normalized LR25M, 3600025M, 7200025M, 144000100M, 36000100M, 72000100M, 144000400M, 36000400M, 72000400M, 144000Average0.00.20.40.60.81.0Normalized Steps0.00.20.40.60.81.0Normalized LRAverage(1x)2(1x)321x1+cos(x)210x ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL VALIDATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': 'peak learning rate (3 × 10−5 by default). The warmup phase spans 2,160 steps, but as the focus is on the post-warmup phase, only the post-warmup sections are used for fitting (Hu et al., 2024; Tissue et al., 2024). 0100020003000400050006000Step0.000.020.040.060.080.100.12Loss ReductionAverage Error: 2.75e-06loss reductionpredloss reductionpred510152025304060100A (x105)0100020003000400050006000Step0.000.020.040.060.080.10Loss ReductionAverage Error: 4.99e-06loss reductionpredloss reductionpred47121618222629B (x105) Under review as a conference paper at ICLR 2025 ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'one with a constant learning rate schedule, and one with a two-stage learning rate schedule. The default test set includes unseen learning rate schedule types, loss curves over longer horizons, and more extreme two-stage learning rate schedules. Detailed descriptions of the training and test sets ', 'modified_lines': 'are provided in Table 6. Unless otherwise specified, the ending learning rate is set to 1/10 of the 21 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 ', 'original_lines': 'are provided in Table 5. Unless otherwise specified, the ending learning rate is set to 1/10 of the 20 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Set ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'size is fixed at 128, and the sequence length is set to 4,096 across all configurations, resulting in 0.5M tokens per step. To simplify, data volume is described in terms of steps, where 10,000 steps consume 5B tokens. Validation loss is used as the default performance measure. Detailed model ', 'modified_lines': 'training hyperparameters are listed in Table 7, and a summary of the model series parameters used in the experiments is presented in Table 8. ', 'original_lines': 'training hyperparameters are listed in Table 6, and a summary of the model series parameters used in the experiments is presented in Table 7. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 1296 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1294 1295 ', 'modified_lines': ' ', 'original_lines': 'Figure 16: Ablation over peak learning rates. Left: the learning rates of the schedules; Right: the loss curves of schedules. Updown: the first three rows are the results for the peak learning rate at 4 × 10−4 and the last three rows are for the peak learning rate at 6 × 10−4. For each set of the three rows, the first row shows the fitting on the training set, the second row shows the prediction over unseen schedules and the third row shows the extrapolation on a long horizon loss curve. 24 500010000150002000025000Step1234Learning Rate (x104)ConstCosineTwo-StageB=1.2×104500010000150002000025000Step3.43.63.8Losslosspred500010000150002000025000Step1234Learning Rate (x104)Two-StageB=4×105WSDWSDLD500010000150002000025000Step3.43.63.8Losslosspred01234567Step (x104)1234Learning Rate (x104)ConstCosine01234567Step (x104)3.23.43.63.84.0LossConst lossConst predCosine lossCosine pred500010000150002000025000Step246Learning Rate (x104)ConstCosine500010000150002000025000Step3.43.63.8Losslosspred500010000150002000025000Step246Learning Rate (x104)Two-StageB=6×105WSDWSDLD500010000150002000025000Step3.43.63.8Losslosspred01234567Step (x104)246Learning Rate (x104)ConstCosine01234567Step (x104)3.23.43.63.8LossConst lossConst predCosine lossCosine pred ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'after_section': None, 'context_after': 'Figure 19: The comparison between the optimized schedules with the WSD variants with end LR as 0. WSD (ZE) and WSDLD (ZE) represent the WSD and WSDLD variants with ending learning ', 'paragraph_idx': 48, 'before_section': None, 'context_before': '1401 1402 1403 ', 'modified_lines': ' Figure 17: Our optimized LR schedules and their loss curve compared with Cosine, WSD, and WSDLD schedules. The total step number is 24000. The decay step number of WSD and its variant is 4000. Upper: 25M; Lower: 100M; Left: Learning rates over step; Right: Losses over step. Figure 18: Left: optimized LR schedule vs Cosine LR schedule. The total step number is 72000, and model sizes range from 25M to 400M. Right: The loss curves of optimized schedules and Cosine schedules. 
26 500010000150002000025000Step0.00.51.01.52.02.53.0Learning Rate (x104)CosineWSDLDWSDOpt500010000150002000025000Step3.43.63.84.0Loss19000200002100022000230003.263.283.303.323.34CosineWSDLDWSDOpt500010000150002000025000Step0.51.01.52.02.53.0Learning Rate (x104)CosineWSDLDWSDOpt500010000150002000025000Step3.03.23.43.63.8Loss19000200002100022000230002.942.962.983.003.023.04CosineWSDLDWSDOpt010000200003000040000500006000070000Step0.00.51.01.52.02.53.0Learning Rate (x104)CosineOpt(25M)Opt(100M)Opt(400M)LossLearning Rate5000055000600006500070000Step2.62.72.82.93.03.13.23.3Loss Under review as a conference paper at ICLR 2025 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(A) where Ξ represents the hyper-parameter and undetermined constants in L(s), which is fixed in our setting to optimize η1, . . . , ηs. And for simplicity of derivation, we introduce η0 in front of η as the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 's.t., 0 ≤ ηi ≤ ηi−1, ∀1 ≤ i ≤ T, ηi ≤ η0, ', 'modified_lines': '', 'original_lines': '27 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 500010000150002000025000Step0.00.51.01.52.02.53.0Learning Rate (x104)CosineWSDWSDSCLossLR3.253.503.754.004.254.50Loss1800019000200002100022000230003.253.3010000200003000040000500006000070000Step3.23.33.43.53.6Loss60000625006500067500700003.143.163.183.203.22Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Lemma 1. There exists at most 1 index i ∈ {1, 2, . . . , s} such that ∆i ̸= 0. proof. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '= 0 or δi = 0 is satisfied for all i ∈ {1, 2, . . . , s}. So now ∂∆i we can get a lemma below ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ηmax cos( πt', 'after_section': None, 'context_after': '(cid:125) (cid:123)(cid:122) of number i+1 → 0 → · · · → 0 ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'law, which is η0 → · · · → η0 ', 'modified_lines': '(cid:124) ', 'original_lines': '(cid:124) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'covariance matrix Σ in the gradient noise. Specifically, if we make certain assumptions about the Hessian matrix H and the noise covariance matrix Σ, we can argue that the loss follows a multi- power law. Assumption 1. Let λi be the ith eigenvalue of H, and Σii be the element of Σ in the ith column i.i.d.∼ q(Σ) f or all i ∈ {1, 2, . . . , d}, where α > −1 and and ith row. λi ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'From spectra to scaling law for the loss. 
We now aim to analyze the scaling behavior of the loss for the quadratic loss function defined above during training. This behavior is typically determined by the eigenvalue spectrum of the Hessian and the spectrum of the diagonal elements of the noise ', 'modified_lines': '', 'original_lines': ' 29 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ηmax cos( πt', 'after_section': None, 'context_after': '(cid:125) (cid:123)(cid:122) constant LR term ', 'paragraph_idx': 16, 'before_section': None, 'context_before': ', with the same setting in Theorem 2, we have the following estimate ˜Mt(θ0, E) := L0 + AS1(t)−α−2 ', 'modified_lines': '(cid:124) ', 'original_lines': '(cid:124) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 − exp(−2λiSk) 2 Σii, where Sk := (cid:80)T τ =k ητ , and the estimation error is bounded as ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i=1 ', 'modified_lines': '', 'original_lines': '30 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 Under review as a conference paper at ICLR 2025 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ηmax cos( πt', 'after_section': None, 'context_after': '(cid:125) (cid:123)(cid:122) =: ¯U (θk−1) = Eθk−1∼Φ(θ0,E≤k−1) ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Proof. By the definition of Ak and Ak−1, we have Ak − Ak−1 = Eθk∼Φ(θ0,E≤k)[U (θk, ηk, Sk+1)] − Eθk−1∼Φ(θ0,E≤k−1)[U (θk−1, ηk−1, Sk)] Egk∼N (Hθk−1,Σ)[U (θk−1 − ηkgk, ηk, Sk+1) | θk−1] ', 'modified_lines': '(cid:124) ', 'original_lines': '(cid:124) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'exp(2) λi ηmaxΣii. Proof. 
By the update rule, we have E[θ2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0,i exp(−2λi(S1 − Sk)) + ', 'modified_lines': '', 'original_lines': '32 ¯U (θk−1) = = = 1 2 1 2 1 2 d (cid:88) (cid:18) i=1 d (cid:88) (cid:18) i=1 d (cid:88) (cid:18) i=1 ¯U (θk−1) = = 1 2 1 2 d (cid:88) (cid:18) i=1 d (cid:88) (cid:18) i=1 + 1 2 d (cid:88) i=1 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where the error bound ϵ can be bounded as d ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Σii + ϵ, ', 'modified_lines': '', 'original_lines': '33 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Next, to prove Theorem 2, we take the expectation of M (θ, E) over all λi and Σii as ηmaxE[Σ exp(−2λS1)] ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'k=1 Plugging in the expression of each Ak, we get the results in Theorem 3. ', 'modified_lines': '', 'original_lines': ' d (cid:88) i=1 Σiiλi. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-28 11:34:34
ICLR.cc/2025/Conference
bC6g8dERxz
jzv0RWj72m
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'k Sk(t). validate that after fitting the parameters of the law on at most 3 pretraining runs, it can predict Chinchilla scaling law, which relies solely on the final loss of each training run to fit its param- eters, our approach utilizes the entire loss curve of each training run to fit the parameters, thus significantly reducing the number of training runs and compute resources needed for accurate that by minimizing the predicted final loss according to the law, we can obtain an optimized LR schedule that outperforms the standard cosine schedule. Interestingly, the optimized schedule has a similar shape as the recently proposed WSD schedule (Hu et al., 2024), but its shape is two-stage schedules, we conduct a series of ablation studies on LR schedules with increasing complexity, which has helped us to gain strong insights into the empirical relationship between 2 ηmax + 1−α 2 ηmax cos( πt L(t) = Lconst(Z(t)) − (Lconst(Z(t)) − L(t)) (cid:125) (cid:124) ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '(2) ', 'modified_lines': 'More specifically, LD(t) is linear with a cumulative sum of the LR reductions ηk−1 − ηk over time, scaled by a nonlinear factor G(η−γ k Sk(t)). This factor gradually saturates to a constant as the training progresses, which follows a power law in a scaled sum of learning rates η−γ We call this law of L(t) the Multi-Power Scaling Law (MPL) as it consists of multiple power-law forms. L0, A, B, C, α, β, γ are the parameters of the law and can be fitted by running very few pretraining experiments with different LR schedules. Our main contributions are as follows: 1. We propose the Multi-Power Law (1) for schedule-aware loss curve prediction, and empirically the loss curve for unseen LR schedules with remarkable accuracy (see Figure 2). Unlike the predictions (Figure 5). Extensive experiments are presented for various model architectures, sizes, and training horizons (Section 4). 2. Our Multi-Power Law is accurate enough to be used to search for better LR schedules. We show optimized so well that it outperforms WSD with grid-searched hyperparameters (Section 5). 3. We use a novel “bottom-up” approach to empirically derive the Multi-Power Law. Starting from the LR schedule and the loss curve (Section 3). 4. We present a theoretical analysis for quadratic loss functions and show that the Multi-Power Law can arise when the Hessian and noise covariance matrices have a power-law decay in their eigenvalues (Appendix B). 3 8000240000.501.001.502.002.503.008000120003.053.103.153.203.25800016000240002.753.003.253.503.754.0025M400M100M20000240002.702.752.802.852.90800024000Step0.501.001.502.002.503.008000120003.053.103.153.203.2580001600024000Step2.753.003.253.503.754.0025M400M100M20000240003.253.303.353.40Training SetTwo-StageB=9×105ConstCosineTest SetTwo-StageB=3×105Two-StageB=1.8×104WSDWSDLDLearning Rate (x104)LossLRLossPredictionLRLossPrediction Published as a conference paper at ICLR 2025 2 PRELIMINARY Learning Rate Schedule. A learning rate (LR) schedule is a sequence E := {η1, . . . , ηT } that specifies the LR at each step of the training process. For language model pretraining, the cosine LR schedule (Loshchilov & Hutter, 2017) is the most popular schedule, which can be expressed as ηt = 1+α T ). Here, ηmax is the peak LR and α is usually set to 0.1. The Warmup-Stable-Decay (WSD) schedule (Hu et al., 2024) is a recently proposed LR schedule. 
This schedule first goes through a warmup phase, then maintains at a stable LR ηmax with Tstable steps, and finally decays in the form of f (t − Tstable)ηmax for Tstable ≤ t ≤ Ttotal. Here f (x) ∈ (0, 1) can be chosen as linear or exponential decay functions. We visualize these two LR schedules in Figure 1(a). Warmup Phase. Many LR schedules, such as WSD, include a warmup phase in which the LR gradually increases from 0 to the peak LR ηmax over a few thousand steps. We denote the number of warmup steps as W . By default, the LR increases linearly, so the total LR sum during warmup is given by SW = 1 2 ηmaxW . Our analysis focuses on the training process after the warmup, where the LR is decaying in almost all LR schedules. We count training steps starting from the end of warmup and set t = 1 as the first step after warmup. Accordingly, {η1, . . . , ηT } represents the post-warmup schedule, and the LR at the last warmup step η0 = ηmax is the peak LR of the entire schedule. Power Law of Data Scaling Prior studies (Hoffmann et al., 2022; Kaplan et al., 2020) demonstrate that, for a fixed model size, the final loss follows a power law of the data size or, equivalently, the total training step number T in a constant-batch-size setting. This relationship is expressed as: L(T ) ≈ ˆL(T ) := L0 + ˜A · T −α, (3) where L0, ˜A, α are parameters to fit. This law is typically fitted over the final losses of a set of training curves generated from a specific LR schedule family, such as a cosine schedule with a given peak LR (ηmax), ending LR (αηmax) and warmup steps (W ). However, applying (3) directly to intermediate steps (t < T ) introduces bias, as the LR schedule up to t bears insufficient decay compared to the full schedule over T , resulting in different loss trajectories. This discrepancy is confirmed in Figure 5(b). We refer to (3) as the Chinchilla Data Scaling Law (abbreviated as CDSL) throughout the paper since it is simplified from the Chinchilla scaling law (Hoffmann et al., 2022) to highlight the data dimension. 3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW In this section, we present the empirical derivation of the Multi-Power Law (MPL) for schedule- aware loss curve prediction. Our key insights are summarized as follows: 1. If two training runs share the same sum of learning rates, (cid:80)T t=1 ηt, then their final losses tend to be similar, though a non-negligible discrepancy remains (Section 3.1). 2. In particular, for a training run with a given LR schedule, the final loss L(T ) is similar to that of another training run using a constant learning rate schedule with the same total LR sum. This motivates us to decompose L(T ) into two components: (1) the final loss of the corresponding constant LR run; and (2) a residual term that captures the effect of LR decay, defined as the difference between the final loss of the target run and the constant LR run. (Section 3.1) 3. Empirically, we observe that training runs with constant learning rates exhibit a Chinchilla-like power-law behavior in the loss curve and can thus be well approximated by a simple power law. (Section 3.2.1) 4. To approximate the residual term, instead of analyzing it directly, we imagine a sequence of training runs with schedules that gradually transition from a constant LR to the target schedule, all while maintaining the same total LR sum. 
Using a novel “bottom-up” approach, we derive an approximation formula for the loss difference introduced by each incremental change in the LR schedule, first by analyzing simple two-stage schedules and then extending the results to more complex schedules. (Sections 3.2.2 and 3.3) Finally, we sum up all the approximation terms above, leading to our MPL. Below, we elaborate on our approach in detail. 4 Published as a conference paper at ICLR 2025 Figure 3: A multi-stage schedule (Appendix A.2) example to illustrate the learning rate (LR) sum matching (Section 3.1) and fine-grained loss reduction decomposition (Section 3.2.2). The step points with the equal LR sum as the final step T9 = 8720 are marked and linked with the dash-point line. Each stage spans 90 steps. T1 = 8000, T2 = 8090, t(1) = ZT2 (T9), t(2) = ZT3 (T9). See Appendix G.3 for experiment details. Left: The actual multi-stage schedule and schedules for auxiliary processes. LR gap between adjacent points denotes the LR reduction ∆η(i) = η(i−1) − η(i). Right: Corresponding training curves for the multi-stage schedule and the auxiliary processes. The total loss reduction is LD(T9) and can be decomposed as the intermediate loss reduction sum. The loss gap between adjacent points denotes the stage-wise loss reduction LD(i)(t(i)). 3.1 OUR APPROACH: LEARNING RATE SUM MATCHING Auxiliary Training Process. As introduced above, we construct a series of auxiliary training runs with LR schedules gradually changing from a constant LR schedule to the target schedule E := {η1, . . . , ηT }. Our construction is detailed as follows. We define the l-th auxiliary process shares the first l steps of learning rates, {η1, . . . , ηl}, with the actual training process with LR schedule E, and continues with the constant LR ηl afterwards. The corresponding loss curve for the l-th auxiliary process is denoted as Ll(t). In particular, the 0-th auxiliary process shares only the warmup phase with the actual training process and uses a constant LR η0 = ηmax after warmup. We especially call it the constant process and use Lconst(t) to represent its loss curve. The T -th auxiliary process coincides with the actual training run with the target LR schedule, so LT (t) = L(t). Learning Rate Sum Matching Decomposition The Multi-Power Law (MPL) approximates the loss curve L(t) of the actual training process through the following decomposition. We define Z(t) as the equivalent step in a constant LR process that shares the same cumulative LR sum as the actual and S(t) = (cid:80)t process up to step t, where Z(t) = S(t) τ =1 ητ represents the sum of post-warmup η0 LRs. The loss at step t is then decomposed as: , ', 'original_lines': 'More specifically, LD(t) is the sum of the LR reduction ηk−1 − ηk at each step k multiplied by a nonlinear factor. The factor gradually saturates to a constant as the training progresses, and the speed of saturation follows a power law in a scaled sum of LRs η−γ We call this law of L(t) the multi-power scaling law as it consists of multiple power-law forms. See also Appendix E.3 for the practical version of our law that accounts for the warmup phase. L0, A, B, C, α, β, γ are the parameters of the law and can be fitted by running very few pretraining experiments with different LR schedules. We summarize our main contributions as follows: 1If there is a LR warmup phase in the schedule, we focus on the decay phase right after the warmup phase. 
2 500010000150002000025000Step0.00.51.01.52.02.53.0Learning Rate (LR x 104)1750018000185002.93.03.1WSDCosineWSD (Tuned)WSDLD (Tuned)Opt500010000150002000025000Step2.83.03.23.43.6Loss22000230002.722.74WSDCosineWSD (Tuned)WSDLD (Tuned)Opt Under review as a conference paper at ICLR 2025 Figure 2: Loss Curves of 25M, 100M, and 400M models from up to down. (a) Fit on Training Set: Our multi-power law is reducible to two-stage and Constant LR schedules, and captures Cosine LR decay effects. (b) Prediction on Test Set: Our law generalizes to unseen schedules like WSDLD and WSD, and handles steep decays in Two-Stage cases. 1. We propose the multi-power law (1) for schedule-aware loss curve prediction, and empirically the loss curve for unseen LR schedules with remarkable accuracy (see Figure 1). Unlike the predictions (Figure 7). Extensive experiments are presented for various model architectures, sizes, and training horizons (Section 3). 2. Our multi-power law is accurate enough to be used to search for better LR schedules. We show optimized so well that it outperforms WSD with grid-searched hyperparameters (Section 4). 3. We use a novel “bottom-up” approach to empirically derive the multi-power law. Starting from the LR schedule and the loss curve (Section 2). 4. We present a theoretical analysis for quadratic loss functions and show that the multi-power law can arise when the Hessian and noise covariance matrices have a power-law decay in their eigenvalues (Appendix K). 2 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW In this section, we present our empirical derivation of the multi-power law for schedule-aware loss curve prediction. In the first place, we reduce the problem to studying a loss reduction term led by LR decay. Then we take a “bottom-up” approach to study this term for LR schedules with increasing complexity, from two-stage, multi-stage, to general LR decay schedules. For the first two cases, we conduct extensive ablation studies on the behavior of the training loss and derive formulas that can accurately predict the loss reduction term. This has finally inspired us to propose the multi-power law for general cases as a natural unification and generalization of the formulas derived for the two special cases. We will further validate our law with extensive experiments in Section 3. Background: Learning Rate Schedule. An LR schedule is a sequence E := {η1, . . . , ηT } that specifies the LR at each step of the training process. In the domain of language model pretraining, the cosine LR schedule (Loshchilov & Hutter, 2017) is the most popular one, which can be expressed as ηt = 1+α T ), where ηmax is the maximum LR and α is usually set to 0.1. The Warmup-Stable-Decay (WSD) schedule (Hu et al., 2024) is a recently proposed LR schedule. This schedule first goes through a warmup phase with W steps, then maintains at a stable LR ηmax with Tstable steps, and finally decays in the form f (s − Tstable)ηmax during stage Tstable≤s≤Ttotal. Here f (x) ∈ (0, 1) can be chosen as linear or exponential decay functions. The visualization of these two LR schedules is in Figure 1(a). Background: Warmup Phase. Many LR schedules, such as WSD, contain a warmup phase that increases the LR gradually from 0 to the maximum LR ηmax over a few steps. 
Our discussion focuses 3 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 500010000150002000025000Step123Learning Rate (x104)ConstCosineTwo-StageB=9×105500010000150002000025000Step3.03.54.0Losslosspred500010000150002000025000Step123Learning Rate (x104)Two-StageB=3×105WSDWSDLD500010000150002000025000Step3.03.54.0Losslosspred Under review as a conference paper at ICLR 2025 (a) LR vs Step t (b) Loss vs Step t Figure 3: Example of Two-Stage cases: tB=11000, xB=3000, ηB = 9 × 10−5, ηA = 3 × 10−4, TA = 8000. (a) A and B have the equal LR sums: xA = 900, tA = 8900. (b) Loss Reduction LD(TA + xB) = LA(tA) − LB(tB). (c) Fitting Loss Reduction ˆLD(TA + xB) with power form results in 0.13(1 − (1 + 0.21x)0.15); Fitting with exponential form results in 0.0790(1 − e−0.01x). The shape of loss reduction is closer to a power form instead of exponential. (c) Loss Reduction vs Step x on the training after the warmup, where the LR is non-increasing in almost all LR schedules. The steps are counted after the warmup phase, i.e., t = 1 is the first step after the warmup. 2.1 OUR APPROACH: LEARNING RATE SUM MATCHING Auxiliary Training Process. We first introduce an auxiliary training process to aid our analysis of the loss curve of the actual training process with LR schedule E := {η1, . . . , ηT }. This auxiliary training process is exactly the same as the actual training process for the first K steps, where K is the largest number such that η1 = η2 = · · · = ηK. Then the auxiliary training process continues training with a constant LR schedule, where the LR is set to η1 for all the remaining steps. We denote the training loss at step t in this auxiliary process as Lconst(t). Learning Rate Sum Matching. The multi-power law for approximating the loss curve L(t) of the actual training process is based on the following decomposition. Define Z(t) as the step in the auxiliary process that has the same sum of LRs as the actual training process at step t. Then, , where Z(t) := ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 − (Cη1−γ', 'after_section': None, 'context_after': 'Table 1: Model performance comparison. R2, MAE, RMSE, PredE, and WorstE are the coefficient of determination, Mean Absolute Error, Root Mean Square Error, Prediction Error, and Worst-case Error, respectively. ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'Llama2 pretraining (70B model, 2T tokens), consistent with trends favoring higher data volumes for fixed model sizes (Dubey et al., 2024). ', 'modified_lines': 'Generalization to Non-monotonic Schedules. MPL extends effectively to complex non- monotonic schedules, although derived for monotonic decay schedules. We test the fitted MPL over challenging cases such as cyclic schedules and the random-polyline schedule, where LR val- ues are randomly selected at every 8,000 steps and connected by a polyline. These experiments, conducted on a 25M-parameter model over 72,000 steps, also represent a demanding long-horizon scenario. As shown in Figure 4, MPL accurately predicts these long-horizon non-monotonic sched- ules, demonstrating its robustness and adaptability. 4.2 COMPARISON WITH BASELINES Comparison with Chinchilla Law. 
While Chinchilla-style data scaling laws, which we abbreviate as CDSLs, are widely adopted (Muennighoff et al., 2023; Hoffmann et al., 2022), MPL offers several distinct advantages: (1) MPL incorporates LR dependency, unlike CDSLs, and (2) MPL predicts the entire loss curve, whereas CDSLs are limited to estimating only the final loss. These advantages enable MPL to achieve higher sample efficiency than CDSLs. Notably, we demonstrate that a single constant and cosine schedule curve suffices to fit MPL with strong generalization. As illustrated in Figure 5(a), MPL reduces final loss prediction to less than 1/3 that of CDSLs while requiring about 1/5 compute budget. Furthermore, MPL excels in fitting the open-source 7B OLMo (Groeneveld 8 024681012Step (x104)2.62.83.03.2Loss12.5012.7513.002.5502.575Loss Curves(C)Loss Curves(M)Loss Ends(C)Loss Ends(M)Pred(C)Pred(M)Loss Curve(Test)Target Loss012345Step (x105)2.02.12.22.32.42.5LossLossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Published as a conference paper at ICLR 2025 ', 'original_lines': 'Generalization to Non-Monotonic Schedules. MPL extends effectively to complex non- monotonic schedules, although derived for monotonic decay schedules. The test set includes chal- lenging cases such as cyclic schedules and the random-polyline schedule, where LR values are randomly selected at every 8,000 steps and connected by a polyline. These experiments, conducted 7 10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.283.30Learning RateLossMulti-powerOne-power0.00.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.28Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Figure 7: Left: Predictions for target loss at 128,000-step for cosine schedule using MPL and CDSL fitting. The CDSL uses the final losses of six cosine losses from 14960 steps to 72000 steps, marked as Loss Ends(C). The MPL uses 24000-step constant and cosine curves, marked as Loss Curves(M). Right: Comparison of MPL and CDSL fits on the open-source 7B OLMo curve generated with a linear schedule. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Momentum Law Momentum Law 0.9904 0.9975 ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '400M Momentum Law ', 'modified_lines': 'Multi-Power Law (Ours) Multi-Power Law (Ours) Multi-Power Law (Ours) ', 'original_lines': 'Multi-power Law (Ours) Multi-power Law (Ours) Multi-power Law (Ours) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 − (Cη1−γ', 'after_section': None, 'context_after': 's.t. 9 10 REFERENCES Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, and Luke Zettlemoyer. Scaling laws for gen- 265–279. PMLR, 2023. Alexander Atanasov, Jacob A Zavatone-Veth, and Cengiz Pehlevan. Scaling and renormalization in ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '0.0100 0.0070 ', 'modified_lines': 'Table 2: Downstream performance comparison for the cosine and our optimized schedules. 
Percent- age changes (↑ or ↓) indicate relative improvements or regressions compared to the cosine schedule. Schedule LAMBADA HellaSwag PIQA ARC-E C3 RTE Cosine Optimized 46.54 48.71 (↑ 2.17%) 37.12 37.74 (↑ 0.62%) 65.13 65.07 (↓ 0.06%) 43.56 44.09 (↑ 0.53%) 48.44 50.30 (↑ 1.86%) 52.71 53.79 (↑ 1.08%) et al., 2024), as shown in Figure 5(b). Additional details of the comparison with Chinchilla Law are provided in Appendix H.2. Comparison with Momentum Law. The MPL outperforms the recently proposed Momentum Law(MTL) (Tissue et al., 2024)1 in both accuracy and applicability to discontinuous learning rate schedules. While MTL incorporates LR annealing effects by modeling loss reduction through the momentum of LR decay, it indicates an exponential loss reduction for two-stage LR schedules, in- consistent with our observations (see Appendix A.1). Across the diverse schedules in the test set, MPL consistently outperforms MTL in both average and worst-case prediction accuracy, as sum- marized in Table 1. Additionally, for WSD schedules with linear LR decay, MPL more accurately captures the loss reduction trend during the decay stage, as highlighted in Figure 14(b), compared to MTL. Further details on MTL and its relationship to MPL can be found in Appendix C, with fitting specifcs provided in Appendix H.2. 5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES Due to the high cost of each pretraining run and the curse of dimensionality for LR schedules, it is generally impractical to tune the LR for every training step. To address this, we propose leveraging the Multi-Power Law (MPL) to predict the final loss as a surrogate function to optimize the entire LR schedule, achieving a lower final loss and outperforming the cosine schedule and WSD variants. 5.1 METHOD The Multi-Power Law (MPL) provides an accurate loss estimation, enabling its final loss prediction to serve as a surrogate for evaluating schedules. We represent the learning rate (LR) schedule as a T -dimensional vector E = (η1, . . . , ηT ), with the final loss denoted as L(E) under given hyper- parameters. Our goal is to find the optimal LR schedule E∗ = arg minE L(E). Using MPL, we parameterize the predicted final loss as LΘ(E) with parameters Θ = {L0, A, B, C, α, β, γ}, esti- mated as outlined in Section 4. We approximate E∗ by optimizing the surrogate loss L ˆΘ(E) subject to monotonicity constraints: ˆE = min E L ˆΘ(E) 0 ≤ ηt ≤ ηt−1, ∀ 1 ≤ t ≤ T. (15) This optimization induces an “optimal” schedule ˆE derived from MPL with parameter ˆΘ. We set the peak LR η0 = 3 × 10−4 and assume ηt is monotonically non-increasing, reflecting established training practices. We view E as a high-dimensional vector and optimize it using the Adam opti- mizer. Further details are provided in Appendix I. Results for a 400M model are shown in Figure 1, with additional experiments for 25M and 100M models in Figure 18. 1Concurrent work. Early versions of our work are available at https://openreview.net/pdf?id= KnoS9XxIlK(October 2024). Published as a conference paper at ICLR 2025 (a) Long-Horizon Prediction of MPL (b) Loss Curves Comparison for 1B Models Figure 6: (a) Long-horizon loss predictions using MPL for cosine and constant schedules, with model sizes ranging from 25M to 1B (top to bottom). (b) Loss curve comparison for 1B models across the optimized sched- ule (Opt), cosine schedule (Cosine), and simplified optimized schedule (WSDSC, see Section 5.2), featuring a WSD schedule with sqrt-cube decay. 
5.2 RESULTS Optimized LR Schedule Exhibits Stable-Decay Pattern. The optimized LR schedule follows a Warmup-Stable-Decay (WSD) structure, comprising two main post-warmup phases: a stable phase with a constant peak LR, and a decay phase ending with a lower LR, as illustrated in Figures 1 and 18. By contrast, the momentum law (Tissue et al., 2024) theoretically yields a collapsed learning rate schedule, as proved in Appendix J. However, unlike traditional WSD schedules (Hu et al., 2024), which decays linearly or exponentially to 1/10 of the peak LR, our optimized schedule reaches lower ending learning rates, typically below 1/20 of the peak, even close to zero. Using normalized steps ˜t and normalized learning rates ˜ηavg, We find that the decay function of the optimized schedule roughly follows ˜ηavg = (1 − ˜t)1.5, capturing the near-zero ending LR (˜t = 1, ˜ηavg = 0). Optimized LR Schedule Outperforms Cosine Schedules. Across comparison experiments of different model sizes and training steps, our optimized schedules consistently outperform the co- sine schedules, achieving a margin exceeding 0.02. Notably, no WSD-like schedule is present in the training set, highlighting MPL’s extrapolation capability. Figure 19 extends this comparison to longer training horizons and Figure 6(b) validates the superiority for 1B model. we further validate the effectiveness of our optimized schedules by evaluating the downstream task performance. As shown in Table 2, our optimized schedule leads to overall improvements in downstream tasks against the cosine schedules, showing practical gains from loss improvements. Ablation details for longer horizons and larger models are in Appendix I. Optimized LR Schedule Outperforms Tuned WSD Variants. For a 400M model, the decay step of a 24000-step optimized schedule (Figure 1) is close to the optimally tuned step ( 6,000) for WSD and WSDLD schedules, determined via grid search over {3,000, 4,000, 5,000, 6,000, 7,000}. However, it surpasses these decay-ratio-tuned variants, suggesting that tuning the decay ratio alone is insufficient. Adjusting the ending LR to near-zero (see Appendix I) or altering the decay function also falls short. We propose a WSD variant with sqrt-cube decay (WSDSC), whose decay function is ˜ηavg = (1 − ˜t)1.5. WSDSC is effective across various model sizes and architectures, as evidenced in Figures 6(b) and 15(a), offering an alternative decay function for WSD schedules. Yet, it still falls short of the optimized schedule (Figure 6(b)), possibly due to untuned decay ratios. See Appendix I for more details. 6 CONCLUSIONS This paper proposes the Multi-Power Law (MPL) to capture the relationship between loss and LR schedule, derived bottom-up from stage-wise schedules using LR sum matching decomposition and. The fitted MPL can accurately predict the entire loss curve while reducing the computational cost of fitting compared to traditional scaling laws. Through a theoretical analysis of a quadratic loss function, We discuss the possible underlying mechanism for MPL. Furthermore, we get optimized schedules via minimizing the predicted final loss of MPL, and extensively validate their superiority over commonly used schedules, thereby improving training efficiency. 
0246Step (x104)2.53.03.54.0Loss25M100M400M1BConst lossConst predCosine lossCosine pred0246Step (x104)2.42.62.83.03.2Loss672.3752.4002.4250.00.51.01.52.0Learning Rate (x104)CosineOpt(Ours)WSDSC(Ours)LossLRCosineOpt(Ours)WSDSC(Ours)LossLR Published as a conference paper at ICLR 2025 In International Conference on Machine Learning, pp. erative mixed-modal language models. ', 'original_lines': 'on a 25M-parameter model over 72,000 steps, also represent a demanding long-horizon scenario. As shown in Figure 6, MPL accurately predicts these long-horizon non-monotonic schedules, demon- strating its robustness and adaptability. 3.2 COMPARISON WITH BASELINES Comparison with Chinchilla Law. While Chinchilla-style data scaling laws, which we abbrevi- ate as CDSLs, are widely utilized (Muennighoff et al., 2023; Hoffmann et al., 2022), MPL offers several distinct advantages: (1) MPL incorporates LR dependency, unlike CDSLs, and (2) MPL predicts the entire loss curve, whereas CDSLs are restricted to final loss predictions. Based on these advantages, the MPL shows higher sample efficiency than the CDSLs. Moreover, we find that two curves of different schedules are enough to fit the MPL with generalizability, as details are discussed in Appendix A.3. To assess MPL’s sample efficiency and prediction accuracy, we conducted two experiments. First, as shown in the left panel of Figure 7, MPL was fitted to two 24,000-step loss curves and used to predict the final loss of a 128,000-step loss curve trained with a cosine schedule. MPL achieved lower extrapolation error while using only one-quarter of the computational resources required for CDSL fitting. Second, we evaluated MPL and CDSL fits on the 7B OLMo (Groeneveld et al., 2024) curve trained with a linear schedule. As shown in the right panel of Figure 7, MPL aligned closely with the OLMo training loss, whereas the CDSL fit showed significant deviations. Comparison with Momentum Law. The MPL shows higher accuracy and can apply to the dis- continuous schedules compared to the recent Momentum Law (Tissue et al., 2024). The Momentum Law (MTL) (Tissue et al., 2024) incorporates LR annealing effects by modeling loss reduction based on the momentum of LR decay. However, MTL indicates an exponential loss reduction in two-stage LR schedules, which contradicts our observations (see Figure 3). Additionally, as shown in Figure 20, MPL outperforms MTL in predicting loss reduction for WSD schedules with linear LR decay. In the highlighted regions, MPL achieves high accuracy in the decay stage, whereas MTL exhibits substantial error. A summary of prediction results across test sets is provided in Table 1, where MPL consistently outperforms MTL in both average and worst-case scenarios. The details of the MTL and its relation to the MPL can be found in Appendix A.1. 4 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES Due to the high cost of each pretraining run and the curse of dimensionality for LR schedules, it is generally very impossible to tune the LR for every training step. 
However, in this section, we 8 020000400006000080000100000120000Steps2.62.72.82.93.03.13.2Loss1240001260001280001300002.5502.575Loss CurvesChinchilla PredMulti-power PredLoss Ends(C)Loss Curves(M)Target Loss0100000200000300000400000500000Step2.02.12.22.32.42.5LosslossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Under review as a conference paper at ICLR 2025 show that by using the predicted final loss from the MPL, we can optimize the entire LR schedule to significantly reduce the final loss and beat the cosine schedule. 4.1 METHOD Given that the Multi-Power Law (MPL) provides an accurate estimation of the loss, the final loss prediction by MPL can serve as a surrogate for evaluating schedules. Consider the learning rate (LR) as a T -dimensional vector η = (η1, . . . , ηT ) and the final loss L(η) under given hyperparameters. The goal is to identify the optimal LR schedule η∗ = arg maxη L(η). We parameterize the final loss prediction as LΘ(η) using MPL with parameters Θ = {L0, A, B, C, α, β, γ}. The parameters ˆΘ can be estimated as described in Section 3. Using L ˆΘ(η) as a surrogate for L(η), we approximate η∗ by solving: ˆη = min 0 ≤ ηi+1 ≤ ηi, ∀ 1 ≤ i ≤ T − 1. (12) L ˆΘ(η) η This process induces an “optimal” schedule ˆη derived from MPL with parameter ˆΘ. We set the initial learning rate η0 to 3 × 10−4 and assume ηi is monotonically non-increasing based on prior knowledge. The high-dimensional vector η is optimized using the Adam optimizer. Additional details are provided in Appendix H. 4.2 RESULTS Induced LR Schedule Exhibits Stable-Decay Behavior. The induced learning rate schedule fol- lows a Warmup-Stable-Decay (WSD) pattern, comprising two main stages after the warmup phase. It maintains a peak LR for an extended period, followed by a rapid decay to a near-zero LR, as shown in Figure 1 and Figure 17. Induced LR Schedule Outperforms Cosine Schedule. Figures 1 and 17 compare the induced schedules with the cosine and WSD schedules across models ranging from 25M to 400M. Figure 18 extends this comparison to longer training horizons. The induced schedules consistently outperform the cosine schedule, achieving a margin over 0.02, which is significant given the limited training scale. Notably, no WSD-like schedule is present in the training set, making the prediction of such loss curves an extrapolation by MPL. Characteristics of the Induced Schedules. The induced schedules provide insights into hyperpa- rameter tuning for WSD schedules. Observations from Figures 1 and 17 highlight the following: 1. Lower Ending LR Compared to Common Practice. Prior research (Hoffmann et al., 2022; Kaplan et al., 2020; H¨agele et al., 2024) recommends an ending LR of 1/10 of the peak LR. However, our findings suggest that a lower ending LR—typically below 1/20 of the peak LR—is more effective in most scenarios. The impact of the ending LR appears to vary based on the de- cay function used. For example, as illustrated in Figure 19, WSD schedules with linear decay benefit from lower ending LRs, whereas those employing exponential decay result in higher losses. Overall, the optimized schedule consistently demonstrates superior performance com- pared to these variants. Further details are provided in Appendix H. 2. Final Decay Resembles Power Decay. The relationship between normalized steps ˜t and nor- malized LRs ˜ηavg is well captured by f (x) = (1 − x)−α, where α ≈ 1.5 in our experiments. 
This simplified version, referred to as WSD with Sqrt-Cube Decay (WSDSC), is effective across various model sizes and types, as shown in Figures 8 and 21. While WSDSC does not match the optimized schedule, it simplifies application and informs schedule design. See Section A.2. 3. Alignment with Optimal Decay Ratios. The induced schedules align closely with the optimal decay steps identified via grid search, as illustrated in Figure 1. See Section H for details. 5 DISCUSSION In this section, we conduct experiments over hyper-parameters to check the applicability range of the multi-power law (MPL). The hyperparameters include the model types, model sizes, peak learning rates, and random seeds. In addition to empirical results, we can theoretically derive a multi-power law under a case with a quadratic loss function, providing insight into the nature of the MPL. Model Types. We validate the MPL on GPT-2 (Radford et al., 2019) and OLMo (Groeneveld et al., 2024) models to evaluate the generalizability of the MPL across model architectures. In the preceding experiments, we used the Llama2 (Touvron et al., 2023). For experiments on GPT-2, the validation process followed the procedure fit with curves of cosine and constant schedules, described 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 Figure 8: Left: Long horizon prediction for the cosine and constant schedules. From up to down, the model sizes range from 25M to 1B. Right: The comparison on 1B models between the opti- mized schedule (Opt), cosine schedule (Cosine), and the simplified optimized schedule (WSDSC, see Section 4.2), a WSD schedule with sqrt-cube decay. Table 2: Downstream performance comparison for Cosine and induced schedules. Percentage changes (↑ or ↓) indicate relative improvements or regressions compared to the Cosine schedule. ARC-E Downstream Dataset LAMBADA HellaSwag PIQA Cosine Schedule Induced Schedule 46.54 48.71 (↑ 2.17%) 37.12 37.74 (↑ 0.62%) 65.13 65.07 (↓ 0.06%) 43.56 44.09 (↑ 0.53%) in Section 3. For the 7B OLMo model, we fit the MPL on the open-source training curve, which employs a linear decay schedule, as shown on the right of Figure 7. Our results show that the MPL presents a high prediction accuracy across different model types for both self-run and open-source experiments. Details see Appendix I. Model Size. We extended the MPL and its induced schedule to a larger scale by training a 1B- parameter model on 144B data tokens. The MPL was fitted over 24,000 steps and successfully predicted loss curves up to 72,000 steps, as shown in Figure 8. We tested the performance of the induced 72,000-step schedule and its simplified version (see Section 4.2) against the widely used co- sine schedule. The induced schedule outperformed the cosine schedule, while the simplified version achieved results between the induced and cosine schedules. To further validate the effectiveness of the induced schedules, we compared downstream task performance for models trained using the co- sine and induced schedules. As shown in Table 2, the induced schedule led to overall improvements in downstream tasks. Details see Appendix I. Peak Learning Rate Ablation. We evaluated the applicability of the MPL across different peak learning rates. In previous experiments, the peak learning rate was fixed at 3 × 10−4. 
However, as shown in Figure 4, the empirical behavior of two-stage learning rate schedules deviates when the peak learning rate increases. To investigate this, we conducted experiments with peak learning rates of 4 × 10−4 and 6 × 10−4. The MPL achieved an average R2 value of 0.9965 for the 4 × 10−4 case and 0.9940 for the 6 × 10−4 case, demonstrating consistently high accuracy. Details see Appendix I. Batch Size Ablation. We conduct ablation experiments on sequence batch sizes of 64 and 256 over 25M models, apart from 128 in previous experiments. The MPL presents a consistent accuracy with R2 higher than 0.9970. See Appendix I. Random Seed. We performed an ablation study to examine the impact of random seed variability on curves. We trained a 25M-parameter model for 24,000 steps using the cosine schedules with three random seeds. As shown in Figure 20, the standard deviation of the resulting loss values was approximately less than 0.001, establishing a lower bound for prediction errors. It highlights the prediction accuracy of the MPL discussed in Section 3. Theoretical Results. We also present a theoretical analysis for quadratic loss functions optimizing Gradient Descent (GD) with noise. We can prove that the multi-power law arises when the Hessian and noise covariance matrices follow a power-law distribution in their eigenvalues. See Appendix K for more explicit derivation. 6 CONCLUSIONS AND FUTURE DIRECTIONS In this paper, we introduce the multi-power law for scheduler-aware loss curve prediction, accurately predicting loss curves and inspiring optimal scheduler derivation. Future work includes refining the law, exploring its underlying mechanisms, and studying the LR relationship with unfixed max LR. Our findings enhance understanding of training dynamics in large language models, potentially improving training efficiency. 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 01234567Step (x104)2.53.03.54.0LossConst lossConst predCosine lossCosine pred010000200003000040000500006000070000Step0.00.51.01.52.0Learning Rate (x104)CosineOptWSDSCLossLR2.42.62.83.03.2Loss550006000065000700002.3752.4002.425 Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 erative mixed-modal language models. In International Conference on Machine Learning, pp. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 − (Cη1−γ', 'after_section': None, 'context_after': 'LD(t) = 0 (B = 0) G(x) = 1 ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'MEL MTL MPL ', 'modified_lines': '(Ours) ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.0378 0.0077 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.9921 0.9934 0.9904 ', 'modified_lines': '', 'original_lines': '0.9975 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.0412 0.0101 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.0066 0.0044 0.0047 ', 'modified_lines': '', 'original_lines': '0.0039 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.0111 0.0023 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.0075 0.0057 0.0060 ', 'modified_lines': '', 'original_lines': '0.0046 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.0241 0.0108 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.0020 0.0013 0.0014 ', 'modified_lines': '', 'original_lines': '0.0012 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'U (θ, η, S) := θ2 i λi exp(−2λiS) + ηΣii · ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'i=1 To prove the theorem, we first introduce some notations and auxiliary expectations. WLOG, we ', 'modified_lines': 'assume that H = diag(λ1, . . . , λd), and set θ∗ = 0. And we define that (cid:18) (cid:19) ', 'original_lines': 'assume that H = diag(λ1, . . . , λd). And we define that 1 2 d (cid:88) i=1 (cid:18) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where the error term ϵk is bounded by ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2 Σii + ϵk, ', 'modified_lines': '', 'original_lines': ' 33 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'If S1(t) > 1 ηmax ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'L(θT ) ← L(θt), Si ← Si(t). ', 'modified_lines': '', 'original_lines': 'So we complete the proof of Theorem 2. L.1 PROOF OF COROLLARY 1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 C ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Rηmax(S1(t) + ', 'modified_lines': '', 'original_lines': 'It completes the proof. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-01 15:49:52
ICLR.cc/2025/Conference
jzv0RWj72m
tMqFytuokK
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Other schedules include the cyclic (Smith, 2017), Noam (Vaswani, 2017), and Warmup-Stable- Decay (WSD) schedules (Hu et al., 2024), but there is no consensus on the optimal choice. ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': '2012). These LR schedules sometimes include a warmup phase at the beginning, where the LR linearly increases from zero to a large value over a few thousand steps, and only after this warmup phase does the LR start to decay. The most commonly used LR schedule in LLM pretraining is ', 'modified_lines': 'the cosine schedule (Loshchilov & Hutter, 2016), which decays the LR following a cosine curve. ', 'original_lines': 'the cosine schedule (Loshchilov & Hutter, 2017), which decays the LR following a cosine curve. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'In this paper, we aim to quantify how LR schedules influence the evolution of training loss in LLM pretraining through empirical analysis. More specifically, we study the following problem, which we ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'does not generalize well to other LR schedules, or even to the same schedule with early stopping. Moreover, existing scaling laws lack a term to account for LR schedules, limiting their ability to provide practical guidance on setting the LRs. This issue can become even more pronounced when ', 'modified_lines': 'scaling up training to trillions of tokens (Dubey et al., 2024; Liu et al., 2024), where the extreme cost of training makes it impractical to experiment with multiple LR schedules. ', 'original_lines': 'scaling up training to trillions of tokens (Dubey et al., 2024; DeepSeek-AI et al., 2024), where the extreme cost of training makes it impractical to experiment with multiple LR schedules. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'LD(t) := B ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'can accurately predict the loss curves of unseen schedules, including WSDLD, WSD, and two-stage schedules with a different LR in the second stage. See Table 1 for evaluation metrics. ', 'modified_lines': 'Here, SW denotes the sum of learning rates used in the warmup phase. The first two terms L0 + A · (S1(t) + SW )−α can be viewed as an extension of the Chinchilla scaling law by replacing the number of steps T with the cumulative sum of learning rates up to step t, while neglecting the dependence on the model size. While this alone provides a crude approximation of the loss curve by linearizing the contribution of the LR at each step (see Section 3.1 for further discussion), it does not account for the specific shape of the LR decay. The additional term LD(t) serves as a correction term, which captures the effect of LR decay in further reducing the loss: ', 'original_lines': 'Here, SW denotes the sum of learning rates used in the warmup phase. The expression L0 + A · (S1(t)+SW )−α can be viewed as an extension of the Chinchilla scaling law by replacing the number of steps T with the cumulative sum of learning rates up to step t, while neglecting the dependence on the model size. 
While this alone provides a crude approximation of the loss curve by linearizing the contribution of the LR at each step (see Section 3.1 for further discussion), it does not account for the specific shape of the LR decay. The additional term LD(t) serves as a correction term, which captures the effect of LR decay in further reducing the loss: ', 'after_paragraph_idx': 10, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '3 ', 'paragraph_idx': 10, 'before_section': '1 INTRODUCTION', 'context_before': 'complexity, which has helped us to gain strong insights into the empirical relationship between the LR schedule and the loss curve (Section 3). ', 'modified_lines': '4. We provide a theoretical analysis for quadratic loss functions and demonstrate that the Multi- Power Law emerges when the Hessian and noise covariance matrices exhibit certain types of power-law structures (Appendix B). ', 'original_lines': '4. We present a theoretical analysis for quadratic loss functions and show that the Multi-Power Law can arise when the Hessian and noise covariance matrices have a power-law decay in their eigenvalues (Appendix B). ', 'after_paragraph_idx': 10, 'before_paragraph_idx': 10}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'ηt = 1+α T ). Here, ηmax is the peak LR and α is usually set to 0.1. The Warmup-Stable-Decay (WSD) schedule (Hu et al., 2024) is a recently proposed LR schedule. This ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'Learning Rate Schedule. A learning rate (LR) schedule is a sequence E := {η1, . . . , ηT } that specifies the LR at each step of the training process. For language model pretraining, the cosine ', 'modified_lines': 'LR schedule (Loshchilov & Hutter, 2016) is the most popular schedule, which can be expressed as ', 'original_lines': 'LR schedule (Loshchilov & Hutter, 2017) is the most popular schedule, which can be expressed as ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'reduction sum. The loss gap between adjacent points denotes the stage-wise loss reduction LD(i)(t(i)). 3.1 OUR APPROACH: LEARNING RATE SUM MATCHING Auxiliary Training Process. As introduced above, we construct a series of auxiliary training runs with LR schedules gradually changing from a constant LR schedule to the target schedule E := Learning Rate Sum Matching Decomposition The Multi-Power Law (MPL) approximates the loss curve L(t) of the actual training process through the following decomposition. We define Z(t) ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 Figure 3: A multi-stage schedule (Appendix A.2) example to illustrate the learning rate (LR) sum matching ', 'modified_lines': '(Section 3.1) and fine-grained loss reduction decomposition (Section 3.2.2). The steps with equal LR sum as the final step T9 = 8720 are marked and linked with the dash-point line. Each stage spans 90 steps. T1 = 8000, T2 = 8090, t(1) = ZT2 (T9), t(2) = ZT3 (T9). See Appendix F.3 for experiment details. Left: The actual multi-stage schedule and schedules for auxiliary processes. LR gap between adjacent points denotes the LR reduction ∆η(i) = η(i−1) − η(i). Right: Corresponding training curves for the multi-stage schedule and the auxiliary processes. The total loss reduction is LD(T9) and can be decomposed as the intermediate loss {η1, . . . , ηT }. 
Our construction is detailed as follows. We define the k-th auxiliary process shares the first k steps of learning rates, {η1, . . . , ηk}, with the actual training process with LR schedule E, and continues with the constant LR ηk afterwards. The corresponding loss curve for the k- th auxiliary process is denoted as Lk(t). In particular, the 0-th auxiliary process shares only the warmup phase with the actual training process and uses a constant LR η0 = ηmax after warmup. We especially call it the constant process and use Lconst(t) to represent its loss curve. The T -th auxiliary process coincides with the actual training run with the target LR schedule, so LT (t) = L(t). ', 'original_lines': '(Section 3.1) and fine-grained loss reduction decomposition (Section 3.2.2). The step points with the equal LR sum as the final step T9 = 8720 are marked and linked with the dash-point line. Each stage spans 90 steps. T1 = 8000, T2 = 8090, t(1) = ZT2 (T9), t(2) = ZT3 (T9). See Appendix G.3 for experiment details. Left: The actual multi-stage schedule and schedules for auxiliary processes. LR gap between adjacent points denotes the LR reduction ∆η(i) = η(i−1) − η(i). Right: Corresponding training curves for the multi-stage schedule and the auxiliary processes. The total loss reduction is LD(T9) and can be decomposed as the intermediate loss {η1, . . . , ηT }. Our construction is detailed as follows. We define the l-th auxiliary process shares the first l steps of learning rates, {η1, . . . , ηl}, with the actual training process with LR schedule E, and continues with the constant LR ηl afterwards. The corresponding loss curve for the l-th auxiliary process is denoted as Ll(t). In particular, the 0-th auxiliary process shares only the warmup phase with the actual training process and uses a constant LR η0 = ηmax after warmup. We especially call it the constant process and use Lconst(t) to represent its loss curve. The T -th auxiliary process coincides with the actual training run with the target LR schedule, so LT (t) = L(t). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': '5 80008180836085408720Step3.02.52.01.51.00.5Learning Rate (x104)(0)(1)(2)(3)(4)(5)(6)(7)(8)(2)t(1)t(2)Equal LR Sum80008180836085408720Step3.443.463.483.503.523.54LossLD(8720)LD(2)(t(2))t(1)t(2)Equal LR Sum0.30.40.50.81.01.52.02.53.0Learning Rate (x104) Published as a conference paper at ICLR 2025 corresponds to evolving θ(τ ) over a small time interval of length ηt. When the learning rates are sufficiently small, the parameters after t steps of SGD are close to θ(τ ) at time τ = (cid:80)t k=1 ηk. ', 'paragraph_idx': 21, 'before_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_before': 'Elkabetz & Cohen, 2021). Here gradient flow describes a continuous-time process in which the parameters θ(τ ) evolve according to the differential equation dθ(τ ) dτ = −∇L(θ(τ )), where ∇L(θ) ', 'modified_lines': 'is the gradient at θ, and τ denotes the continuous time. In this approximation, the t-th step of SGD ', 'original_lines': 'is the gradient at θ, and τ denotes the continuous time. 
In this approximation, the t-th step of SGD ', 'after_paragraph_idx': None, 'before_paragraph_idx': 21}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_after': '3.2.2 LOSS REDUCTION APPROXIMATION ', 'paragraph_idx': 20, 'before_section': None, 'context_before': '(5) where A is a parameter counterpart of ˜A. We perform extensive empirical validation and ablation studies across different model sizes, training horizons, and learning rates to confirm the robustness ', 'modified_lines': 'of (5), as detailed in Appendix F.1 and illustrated in Figure 11. ', 'original_lines': 'of (5), as detailed in Appendix G.1 and illustrated in Figure 11. ', 'after_paragraph_idx': 20, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': '(6) where B is a constant. This finding highlights a strong correlation between the loss gap and the LR gap at equivalent LR sum points on the loss landscape. However, while the linear approximation ', 'paragraph_idx': 22, 'before_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_before': 'approximation as a warmup, then we further break down the term with a finer-grained LR sum matching approach. ', 'modified_lines': 'Warmup: A Crude Linear Approximation. We first generate training loss curves across var- ious LR schedule types, including cosine and WSD schedules, alongside the loss curves of their corresponding constant processes. Then we can compute the loss reduction LD(t) for different LR schedules and analyze their dependency. As demonstrated in Figure 10, LD(t) is approximately pro- portional to the LR reduction, ∆ηt = η0 − ηt across different schedules. This leads to the following approximation: LD(t) ≈ B(η0 − ηt), ', 'original_lines': 'A Crude Linear Approximation. We first generate training loss curves across various LR sched- ule types, including cosine and WSD schedules, alongside the loss curves of their corresponding constant processes. Then we can compute the loss reduction LD(t) for different LR schedules and analyze their dependency. As demonstrated in Figure 10, LD(t) is approximately proportional to the LR reduction, ∆ηk = η0 − ηk across different schedules. This leads to the following approximation: LD(t) ≈ B(η0 − ηk), ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_after': 'Fine-Grained LR Sum Matching Decomposition. 1 ηk where Sk(t) = (cid:80)t (9) (10) t−1 (cid:88) k=0 (12) k=1 4 EMPIRICAL VALIDATION OF THE MULTI-POWER LAW The Multi-Power Law (MPL) comes from our speculations based on our experiments with special types of LR schedules. Now we present extensive experiments to validate the law for common LR schedules used in practice. Our experiments demonstrate that MPL requires only two or three LR schedules and their corresponding loss curves in the training set to fit the law. The fitted MPL can then predict loss curves for test schedules with different shapes and extended horizons. 4.1 RESULTS Generalization to Non-monotonic Schedules. MPL extends effectively to complex non- 4.2 COMPARISON WITH BASELINES Comparison with Chinchilla Law. 
While Chinchilla-style data scaling laws, which we abbreviate as CDSLs, are widely adopted (Muennighoff et al., 2023; Hoffmann et al., 2022), MPL offers several distinct advantages: (1) MPL incorporates LR dependency, unlike CDSLs, and (2) MPL predicts the entire loss curve, whereas CDSLs are limited to estimating only the final loss. These advantages enable MPL to achieve higher sample efficiency than CDSLs. Notably, we demonstrate that a single constant and cosine schedule curve suffices to fit MPL with strong generalization. As illustrated in Figure 5(a), MPL reduces final loss prediction to less than 1/3 that of CDSLs while requiring about 1/5 compute budget. Furthermore, MPL excels in fitting the open-source 7B OLMo (Groeneveld et al., 2024), as shown in Figure 5(b). Additional details of the comparison with Chinchilla Law are Comparison with Momentum Law. The MPL outperforms the recently proposed Momentum schedules. While MTL incorporates LR annealing effects by modeling loss reduction through the momentum of LR decay, it indicates an exponential loss reduction for two-stage LR schedules, in- consistent with our observations (see Appendix A.1). Across the diverse schedules in the test set, ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'the stage switch, whereas the true loss decline remains smoother during the training process. See Appendix C for further discussion. ', 'modified_lines': 'In practice, the loss reduction term LD(t) can have a more complex dependency on the LR schedule. To provide a more accurate approxima- tion than the linear approximation above, we employ LR sum matching between adjacent auxiliary processes and decompose the loss reduction LD(t) into a telescoping sum of intermediate loss re- ductions between adjacent auxiliary processes. More specifically, consider the step t in the actual training process. Similar to Z(t), we define tk := Zk(t) as the equal-LR-sum step in the k-th auxiliary process, which is given by τ =k ητ . Then, for the k-th and (k + 1)-th processes, we define the intermediate tk := Zk(t) := k − 1 + Sk(t), (7) loss reduction as: LDk(tk+1) := Lk(tk) − Lk+1(tk+1). (8) Intuitively, this term compares the loss at step tk+1 in the (k+1)-th process with the loss at the equal- LR-sum step in a process that stops decaying the LR after the first k steps, i.e., the k-th process. We then decompose the loss reduction term as a telescoping sum of intermediate loss reductions: LD(t) = Lconst(Z(t)) − L(t) = L0(Z0(t)) − Lt(Zt(t)) = t−1 (cid:88) k=0 LDk(tk+1). 6 Published as a conference paper at ICLR 2025 (a) Cyclic Schedule (b) Random-Polyline Schedule Figure 4: The examples of long-horizon non-monotonic schedules. The one-power line represents the constant process prediction. (a) The cyclic schedule with 72000 steps, where each half-cycle spans 8000 steps, and (b) The random-polyline schedule, consisting of piecewise linear the first decay begins after 16000 steps. interpolation between randomly selected intermediate learning rates in the range of 3 × 10−5 to 3 × 10−4, with LR milestones occurring at intervals of 8000 steps. By leveraging this fine-grained decomposition, a good estimation of LDk(tk+1) can lead to a more accurate approximation of LD(t). Where the context is clear, we simplify notation by omitting subscripts and denoting intermediate loss reduction as LDk(t). 3.3 BOTTOM-UP DERIVATION: TWO-STAGE, MULTI-STAGE, AND GENERAL SCHEDULES The challenges in approximating the intermediate loss reduction LDk(t) are twofold. 
First, for commonly used schedules, the learning rate (LR) reduction at intermediate steps is often too small to induce a measurable loss reduction. Second, LDk(t) may depend intricately on all previous learning rates {η1, . . . , ηk}, which we refer to as the LR prefix in this section. To address these issues, we derive the form of LDk(t) using a “bottom-up” approach regarding schedule structures. First, we propose its form through schedules comprising two constant LR stages, leveraging significant LR reductions. Next, we examine its dependency on the LR prefix using schedules of multiple stages. Our main finding is that LDk(t) depends weakly on the LR prefix and can be approximated by the following form: LDk(t) ≈ (cid:99)LDk(t) := B(ηk − ηk+1) 1 − (cid:18) (cid:16) Cη1−γ k+1 (t − k) + 1 (cid:17)−β(cid:19) , with LR-prefix independent constants B, C, γ and β. Due to space constraints, we refer readers to Appendices A.1 and A.2 for detailed derivations of (10). For general LR schedules, we extrapolate this findings and propose to approximate the total loss reduction term as: t−1 (cid:88) (cid:99)LD(t) := (cid:99)LDk(tk+1) = B(ηk − ηk+1) 1 − (cid:18) (cid:16) Cη1−γ k+1 (tk+1 − k) + 1 (cid:17)−β(cid:19) k=0 By the definition of tk+1 (7), we have tk+1 − k = Sk+1(t) ηk+1 B(ηk−1 − ηk) (cid:0)1 − (Cη−γ LD(t) ≈ (cid:99)LD(t) = t (cid:88) k Sk(t) + 1)−β(cid:1) , . Therefore, we can conclude k=1 where we also change the subscript indices from k + 1 to k. Combining the above ansatz for the loss reduction term with the power-law ansatz for the auxiliary loss in (5) leads to our Multi-Power Law: L(t) ≈ L0 + A · (S1(t) + SW )−α − t (cid:88) B(ηk−1 − ηk) (cid:0)1 − (Cη−γ k Sk(t) + 1)−β(cid:1) . See Appendix C for the ablation studies on different components of the Multi-Power Law. 7 . (11) 10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.283.30Learning RateLossMulti-powerOne-power0.00.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.28Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Published as a conference paper at ICLR 2025 Table 1: Evaluation metrics for the Momentum Law and Multi-Power Law on predicting the loss curves of 25M, 100M, and 400M models with unseen schedules. R2, MAE, RMSE, PredE, and WorstE are the coefficient of determination, Mean Absolute Error, Root Mean Square Error, Prediction Error, and Worst-case Error, respectively. Model Size R2 ↑ MAE ↓ RMSE ↓ PredE ↓ WorstE ↓ Method 25M 100M 400M Momentum Law Multi-Power Law (Ours) Momentum Law Multi-Power Law (Ours) Momentum Law Multi-Power Law (Ours) 0.9904 0.9975 0.9959 0.9982 0.9962 0.9971 0.0047 0.0039 0.0068 0.0038 0.0071 0.0053 0.0060 0.0046 0.0095 0.0051 0.0094 0.0070 0.0014 0.0012 0.0022 0.0013 0.0025 0.0019 0.0047 0.0040 0.0094 0.0058 0.0100 0.0070 (a) Fitting Sample Efficiency Comparison (b) Whole Curve Fitting Comparison Figure 5: (a) Target loss predictions at 128000-step for a cosine schedule using MPL and CDSL fitting, with a 400M model. CDSL fitting requires six cosine losses (Loss Curve(C)) from 14,960 steps to 72000 steps but relies solely on their final losses (Loss Ends(C)). In contrast, MPL leverages the entire 24000-step constant and cosine loss curves (Loss Curves(M)). Final loss predictions are denoted as Pred(C) for CDSL and Pred(M) for MPL respectively. 
(b) Comparison of MPL and CDSL fittings on the whole loss curve of the open-source 7B OLMo model, trained with a linear schedule. Generalization to Unseen LR Schedules. MPL accurately predicts loss curves for LR schedules outside the training set. As illustrated in Figure 2 and Table 1, despite the absence of WSD schedules in the training set and the variety of decay functions, MPL successfully predicts their loss curves with high accuracy. Furthermore, MPL generalizes to two-stage schedules with different ηB values from the training set, effectively extrapolating curves for both continuous and discontinuous cases. Generalization to Longer Horizons. MPL demonstrates the ability to extrapolate loss curves for horizons exceeding three times the training set length. In our runs, the training set contains approximately 22000 post-warmup steps, while the test set includes curves with up to 70000 post- warmup steps. These results validate MPL’s capability to generalize to longer horizons. Notably, the data-to-model ratio for a 25M-parameter model trained over 72000 steps (36B tokens) is comparable to Llama2 pretraining (70B model, 2T tokens), consistent with trends favoring higher data volumes for fixed model sizes (Dubey et al., 2024). monotonic schedules, although derived for monotonic decay schedules. We test the fitted MPL over challenging cases such as cyclic schedules and the random-polyline schedule, where LR values are randomly selected at every 8000 steps and connected by a polyline. These experiments, conducted on a 25M-parameter model over 72000 steps, also represent a demanding long-horizon scenario. As shown in Figure 4, MPL accurately predicts these long-horizon non-monotonic schedules. 8 024681012Step (x104)2.62.83.03.2Loss12.5012.7513.002.5502.575Loss Curves(C)Loss Curves(M)Loss Ends(C)Loss Ends(M)Pred(C)Pred(M)Loss Curve(Test)Target Loss012345Step (x105)2.02.12.22.32.42.5LossLossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Published as a conference paper at ICLR 2025 Table 2: Downstream performance comparison for the cosine and our optimized schedules. Percent- age changes (↑ or ↓) indicate relative improvements or regressions compared to the cosine schedule. Schedule LAMBADA HellaSwag PIQA ARC-E C3 RTE Cosine Optimized 46.54 48.71 (↑ 2.17%) 37.12 37.74 (↑ 0.62%) 65.13 65.07 (↓ 0.06%) 43.56 44.09 (↑ 0.53%) 48.44 50.30 (↑ 1.86%) 52.71 53.79 (↑ 1.08%) provided in Appendix G.2. Law(MTL) (Tissue et al., 2024) in both accuracy and applicability to discontinuous learning rate ', 'original_lines': 'In practice, the loss reduction term LD(t) can have a more complex dependency on the LR schedule. To provide a more accurate approximation than the linear approximation above, we employ LR sum matching between adjacent auxiliary pro- cesses and decompose the loss reduction LD(t) into the sum of intermediate loss reductions between adjacent auxiliary processes. We define the intermediate loss reduction between adjacent auxiliary processes as: LDk(tk+1) := Lk(Ak(tk+1)) − Lk+1(tk+1). (7) Lk and Lk+1 are the loss curves for k-th and (k + 1)-th auxiliary processes respectively. Ak(tk+1) denotes the step in k-th auxiliary process, which has the same LR sum as step tk+1 in the (k + 1)-th auxiliary process. Then we consider the steps in all the auxiliary processes sharing the LR sum with step t in the actual training. 
Thus, analogous to Z(t), the equal-LR-sum step in k-th auxiliary process can be computed through Zk(t) = k − 1 + Sk(t), (8) the same LR sum in the adjacent auxiliary processes, so we obtain τ =k ητ , representing the cumulative LR sum. Clearly, Zk(t) and Zk+1(t) have Consequently, we can derive from (7) and (9): LDk(Zk+1(t)) = Lk(Zk(t)) − Lk+1(Zk+1(t)). Ak(Zk+1(t)) = Zk(t). 6 Published as a conference paper at ICLR 2025 (a) Cyclic Schedule (b) Random-Polyline Schedule Figure 4: The examples of long-horizon non-monotonic schedules. The one-power line represents the constant process prediction. (a) The cyclic schedule with 72,000 steps, where each half-cycle spans 8,000 steps, and the first decay begins after 16,000 steps. (b) The random-polyline schedule, consisting of piecewise linear interpolation between randomly selected intermediate learning rates in the range of 3 × 10−5 to 3 × 10−4, with LR milestones occurring at intervals of 8,000 steps. Finally, the total loss reduction can be decomposed as the sum of intermediate loss reductions: LD(t) = Lconst(Z(t)) − L(t) = L0(Z0(t)) − Lt(Zt(t)) = LDk(Zk+1(t)). (11) Here, Z0(t) = Z(t), ensuring that L0(Z0(t)) = Lconst(Z(t)). By leveraging this fine-grained decomposition, a refined estimation of LDk(tk+1) enables a more precise approximation of LD(t). Where the context is clear, we simplify notation by omitting sub- scripts and denoting intermediate loss reduction as LDk(t). 3.3 BOTTOM-UP DERIVATION: TWO-STAGE, MULTI-STAGE, AND GENERAL SCHEDULES The challenges in approximating the intermediate loss reduction LDk(t) are twofold. First, for commonly used schedules, the learning rate (LR) reduction at intermediate steps is often too small to induce a measurable loss reduction. Second, LDk(t) may depend intricately on all previous learning rates {η1, . . . , ηk}, which we refer to as the LR prefix in this section. To address these issues, we derive the form of LDk(t) using a “bottom-up” approach regarding schedule structures. Initially, we propose its form through schedules comprising two constant LR stages, leveraging significant LR reductions. Next, we examine its dependency on the LR prefix using schedules of multiple stages. Finally, we generalize the form to encompass all schedules and conclude with a multi-power law. The discussion of two-stage and multi-stage schedule is detailed in Appendices A.1 and A.2. For general LR schedules, we extrapolate our findings from the two-stage and multi-stage cases in Appendices A.1 and A.2, and propose to approximate the intermediate loss reduction at step k as the following power form: (cid:18) (cid:16) Cη1−γ k+1 (t − k) + 1 (cid:17)−β(cid:19) , LDk(t) ≈ (cid:99)LDk(t) := B(ηk − ηk+1) 1 − with LR-prefix independent constants B, C, γ and β. Thus, the loss reduction between the constant process and the actual process can be approximated as t−1 (cid:88) (cid:99)LD(t) := (cid:99)LDk(Zk+1(t)) = t−1 (cid:88) B(ηk − ηk+1) (cid:16) 1 − (Cη1−γ k+1 (Zk+1(t) − k) + 1)−β(cid:17) . k=0 k=0 By the definition of Zk(t), we have Zk+1(t) − k = Sk+1(t) ηk+1 . Therefore, we can conclude LD(t) ≈ (cid:99)LD(t) = t (cid:88) B (cid:0)ηk−1 − ηk)(1 − (Cη−γ k Sk(t) + 1)−β(cid:1) , (13) where we also change the subscript indices from k + 1 to k. Combining the above ansatz for the loss reduction term with the power-law ansatz for the auxiliary loss in Equation (5) leads to our multi-power law: L(t) ≈ L0 + A · (S1(t) + SW )−α − t (cid:88) k=1 B(ηk−1 − ηk)(1 − (Cη−γ k Sk(t) + 1)−β). 
(14) 7 10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.283.30Learning RateLossMulti-powerOne-power0.00.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step3.23.33.43.5Loss60000625006500067500700003.203.223.243.263.28Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Published as a conference paper at ICLR 2025 (a) Fitting Sample Efficiency Comparison (b) Whole Curve Fitting Comparison Figure 5: (a) Target loss predictions at 128,000-step for a cosine schedule using MPL and CDSL fitting, with a 400M model. CDSL fitting requires six cosine losses (Loss Curve(C)) from 14,960 steps to 72,000 steps but relies solely on their final losses (Loss Ends(C)). In contrast, MPL leverages the entire 24,000-step constant and cosine loss curves (Loss Curves(M)). Final loss predictions are denoted as Pred(C) for CDSL and Pred(M) for MPL respectively. (b) Comparison of MPL and CDSL fittings on the whole loss curve of the open-source 7B OLMo model, trained with a linear schedule. See Appendix C for the discussion about the simplification of the multi-power law. Generalization to Unseen LR Schedules. MPL can accurately predict loss curves for LR sched- ules outside the training set. As illustrated in Figure 2, despite the absence of WSD schedules in the training set and the variety of decay functions, MPL successfully predicts their loss curves with high accuracy. Furthermore, MPL can generalize to two-stage schedules with different ηB values from the training set, effectively extrapolating curves for both continuous and discontinuous cases. Generalization to Longer Horizons. MPL demonstrates the ability to extrapolate loss curves for horizons exceeding three times the training set length. In our runs, the training set contains approxi- mately 22,000 post-warmup steps, while the test set includes curves with up to 70,000 post-warmup steps. These results validate MPL’s capability to generalize to longer horizons. Notably, the data- to-model ratio for a 25M-parameter model trained over 72,000 steps (36B tokens) is comparable to Llama2 pretraining (70B model, 2T tokens), consistent with trends favoring higher data volumes for fixed model sizes (Dubey et al., 2024). monotonic schedules, although derived for monotonic decay schedules. We test the fitted MPL over challenging cases such as cyclic schedules and the random-polyline schedule, where LR val- ues are randomly selected at every 8,000 steps and connected by a polyline. These experiments, conducted on a 25M-parameter model over 72,000 steps, also represent a demanding long-horizon scenario. As shown in Figure 4, MPL accurately predicts these long-horizon non-monotonic sched- ules, demonstrating its robustness and adaptability. 8 024681012Step (x104)2.62.83.03.2Loss12.5012.7513.002.5502.575Loss Curves(C)Loss Curves(M)Loss Ends(C)Loss Ends(M)Pred(C)Pred(M)Loss Curve(Test)Target Loss012345Step (x105)2.02.12.22.32.42.5LossLossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Published as a conference paper at ICLR 2025 Table 1: Model performance comparison. R2, MAE, RMSE, PredE, and WorstE are the coefficient of determination, Mean Absolute Error, Root Mean Square Error, Prediction Error, and Worst-case Error, respectively. 
Model Size Method R2 ↑ MAE ↓ RMSE ↓ PredE ↓ WorstE ↓ 25M 100M 400M Momentum Law Multi-Power Law (Ours) Momentum Law Multi-Power Law (Ours) Momentum Law Multi-Power Law (Ours) 0.9904 0.9975 0.9959 0.9982 0.9962 0.9971 0.0047 0.0039 0.0068 0.0038 0.0071 0.0053 0.0060 0.0046 0.0095 0.0051 0.0094 0.0070 0.0014 0.0012 0.0022 0.0013 0.0025 0.0019 0.0047 0.0040 0.0094 0.0058 0.0100 0.0070 Table 2: Downstream performance comparison for the cosine and our optimized schedules. Percent- age changes (↑ or ↓) indicate relative improvements or regressions compared to the cosine schedule. Schedule LAMBADA HellaSwag PIQA ARC-E C3 RTE Cosine Optimized 46.54 48.71 (↑ 2.17%) 37.12 37.74 (↑ 0.62%) 65.13 65.07 (↓ 0.06%) 43.56 44.09 (↑ 0.53%) 48.44 50.30 (↑ 1.86%) 52.71 53.79 (↑ 1.08%) provided in Appendix H.2. Law(MTL) (Tissue et al., 2024)1 in both accuracy and applicability to discontinuous learning rate ', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}, {'section': '7 .', 'after_section': None, 'context_after': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES ', 'paragraph_idx': 34, 'before_section': '7 .', 'context_before': 'marized in Table 1. Additionally, for WSD schedules with linear LR decay, MPL more accurately captures the loss reduction trend during the decay stage, as highlighted in Figure 14(b), compared to MTL. Further details on MTL and its relationship to MPL can be found in Appendix C, with fitting ', 'modified_lines': 'specifcs provided in Appendix G.2. ', 'original_lines': 'specifcs provided in Appendix H.2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 34}, {'section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'after_section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'context_after': 'to monotonicity constraints: E 9 ', 'paragraph_idx': 36, 'before_section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'context_before': 'T -dimensional vector E = (η1, . . . , ηT ), with the final loss denoted as L(E) under given hyper- parameters. Our goal is to find the optimal LR schedule E∗ = arg minE L(E). Using MPL, we parameterize the predicted final loss as LΘ(E) with parameters Θ = {L0, A, B, C, α, β, γ}, esti- ', 'modified_lines': 'mated as outlined in Section 4. We approximate E∗ by optimizing the surrogate loss LΘ(E) subject min s.t. 0 ≤ ηt ≤ ηt−1, ∀ 1 ≤ t ≤ T. LΘ(E) (13) This optimization induces an “optimal” schedule under the MPL approximation. In practice, we set the peak LR η0 = 3 × 10−4 We view E as a high-dimensional vector and optimize it using the Adam optimizer. Further details are provided in Appendix H. Results for a 400M model are shown in Figure 1, with additional experiments for 25M and 100M models in Figure 18. 5.2 RESULTS Optimized LR Schedule Exhibits Stable-Decay Pattern. Our optimized LR schedule follows a Warmup-Stable-Decay (WSD) pattern, comprising two main post-warmup phases: a stable phase with a constant peak LR, and a decay phase ending with a lower LR, as illustrated in Figures 1 and 18. By contrast, the momentum law (Tissue et al., 2024) theoretically yields a collapsed learning rate schedule, which we will prove in Appendix I. Optimized LR Schedule Outperforms Cosine Schedules. Across comparison experiments of different model sizes and training steps, our optimized schedules consistently outperform the co- ', 'original_lines': 'mated as outlined in Section 4. We approximate E∗ by optimizing the surrogate loss L ˆΘ(E) subject ˆE = min L ˆΘ(E) s.t. 0 ≤ ηt ≤ ηt−1, ∀ 1 ≤ t ≤ T. 
(15) This optimization induces an “optimal” schedule ˆE derived from MPL with parameter ˆΘ. We set the peak LR η0 = 3 × 10−4 and assume ηt is monotonically non-increasing, reflecting established training practices. We view E as a high-dimensional vector and optimize it using the Adam opti- mizer. Further details are provided in Appendix I. Results for a 400M model are shown in Figure 1, with additional experiments for 25M and 100M models in Figure 18. 1Concurrent work. Early versions of our work are available at https://openreview.net/pdf?id= KnoS9XxIlK(October 2024). ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'after_section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'context_after': 'sine schedules, achieving a margin exceeding 0.02. Notably, no WSD-like schedule is present in the training set, highlighting MPL’s extrapolation capability. Figure 19 extends this comparison to longer training horizons and Figure 6(b) validates the superiority for 1B model. we further validate the effectiveness of our optimized schedules by evaluating the downstream task performance. As shown in Table 2, our optimized schedule leads to overall improvements in downstream tasks against the cosine schedules, showing practical gains from loss improvements. Ablation details for longer 6 CONCLUSIONS This paper proposes the Multi-Power Law (MPL) to capture the relationship between loss and LR 10 0246Step (x104)2.53.03.54.0Loss25M100M400M1BConst lossConst predCosine lossCosine pred0246Step (x104)2.42.62.83.03.2Loss672.3752.4002.4250.00.51.01.52.0Learning Rate (x104)CosineOpt(Ours)WSDSC(Ours)LossLRCosineOpt(Ours)WSDSC(Ours)LossLR Published as a conference paper at ICLR 2025 REFERENCES Alexander Atanasov, Jacob A Zavatone-Veth, and Cengiz Pehlevan. Scaling and renormalization in ', 'paragraph_idx': 37, 'before_section': None, 'context_before': 'ule (Opt), cosine schedule (Cosine), and simplified optimized schedule (WSDSC, see Section 5.2), featuring a WSD schedule with sqrt-cube decay. ', 'modified_lines': 'horizons and larger models are in Appendix H. Optimized LR Schedule Outperforms Tuned WSD Variants. Our optimized schedules lead to smaller final loss than the WSD and WSDLD schedules proposed in Hu et al. (2024). For a 400M model, we find that the decay step of a 24000-step optimized schedule (Figure 1) is close to the optimally tuned step (∼6000) for these WSD schedules, determined via grid search over {3000, 4000, 5000, 6000, 7000}. However, even when the decay ratios of WSD and WSDLD schedules are optimally tuned, our optimized schedule still outperforms them. Further, we analyze key differences between our optimized schedule and these two WSD schedules as follows. The optimized schedule decays to below 1/20 of the peak LR, even approaching to zero, while WSD schedules decay linearly or exponentially to 1/10 of the peak LR. However, simply adjusting the ending LR to near-zero (Appendix H) does not close the gap. Another key difference is the decay function: through symbolic regression, we find that in the decay phase, the optimized schedule roughly follows a power decay function rather than linear or exponential decay: ηt ≈ ηmax · (1 − τ )1.5, where τ is the step number in the decay phase, normalized to [0, 1]. Motivated by this, we propose a WSD variant with sqrt-cube decay (WSDSC), which decays the LR exactly as ηt = ηmax · (1 − τ )1.5. 
WSDSC is effective across various model sizes and architectures and outperforms the WSD schedule, as evidenced in Figures 6(b) and 15(a), though it still falls behind our optimized schedule. See Appendix H for details. schedule. The fitted MPL accurately predicts the entire loss curve while requiring much fewer training runs compared to existing scaling laws. Furthermore, our MPL is accurate enough to be used for optimizing schedules, and we extensively validate the superiority of our optimized schedules over commonly used ones. However, we do observe slight deviations between our predictions and actual training curves, especially for long-horizon and high peak LR cases like in Figures 13 and 16. likely due to several simplifications in our derivation: (1) the coefficient β remains constant across different LR scales; (2) the intermediate loss reduction does not depend on the LR prefix; (3) variations in LR during the warm-up phase are ignored. In future work, we aim to (1) further explore the theoretical foundation of our MPL to uncover its underlying mechanisms; (2) investigate empirical laws for schedule-aware loss curve prediction with varying peak LRs and other hyperparameters; and (3) refine our MPL to further enhance its prediction accuracy and generalizability. 7 ACKNOWLEDGMENT We would like to thank Kaiyue Wen, Huanqi Cao, and all anonymous reviewers, for their insightful comments and feedback. We also thank Hongzhi Zang for improving figure readability. This work is supported by the National Natural Science Foundation of China under Grant Number U20B2044. Steven Adriaensen, Herilalaina Rakotoarison, Samuel M¨uller, and Frank Hutter. Efficient bayesian learning curve extrapolation using prior-data fitted networks. Advances in Neural Information Processing Systems, 36:19858–19886, 2023. Ibrahim M Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. Advances in Neural Information Processing Systems, 35:22300–22312, 2022. ', 'original_lines': '5.2 RESULTS Optimized LR Schedule Exhibits Stable-Decay Pattern. The optimized LR schedule follows a Warmup-Stable-Decay (WSD) structure, comprising two main post-warmup phases: a stable phase with a constant peak LR, and a decay phase ending with a lower LR, as illustrated in Figures 1 and 18. By contrast, the momentum law (Tissue et al., 2024) theoretically yields a collapsed learning rate schedule, as proved in Appendix J. However, unlike traditional WSD schedules (Hu et al., 2024), which decays linearly or exponentially to 1/10 of the peak LR, our optimized schedule reaches lower ending learning rates, typically below 1/20 of the peak, even close to zero. Using normalized steps ˜t and normalized learning rates ˜ηavg, We find that the decay function of the optimized schedule roughly follows ˜ηavg = (1 − ˜t)1.5, capturing the near-zero ending LR (˜t = 1, ˜ηavg = 0). Optimized LR Schedule Outperforms Cosine Schedules. Across comparison experiments of different model sizes and training steps, our optimized schedules consistently outperform the co- horizons and larger models are in Appendix I. Optimized LR Schedule Outperforms Tuned WSD Variants. For a 400M model, the decay step of a 24000-step optimized schedule (Figure 1) is close to the optimally tuned step ( 6,000) for WSD and WSDLD schedules, determined via grid search over {3,000, 4,000, 5,000, 6,000, 7,000}. However, it surpasses these decay-ratio-tuned variants, suggesting that tuning the decay ratio alone is insufficient. 
Adjusting the ending LR to near-zero (see Appendix I) or altering the decay function also falls short. We propose a WSD variant with sqrt-cube decay (WSDSC), whose decay function is ˜ηavg = (1 − ˜t)1.5. WSDSC is effective across various model sizes and architectures, as evidenced in Figures 6(b) and 15(a), offering an alternative decay function for WSD schedules. Yet, it still falls short of the optimized schedule (Figure 6(b)), possibly due to untuned decay ratios. See Appendix I for more details. schedule, derived bottom-up from stage-wise schedules using LR sum matching decomposition and. The fitted MPL can accurately predict the entire loss curve while reducing the computational cost of fitting compared to traditional scaling laws. Through a theoretical analysis of a quadratic loss function, We discuss the possible underlying mechanism for MPL. Furthermore, we get optimized schedules via minimizing the predicted final loss of MPL, and extensively validate their superiority over commonly used schedules, thereby improving training efficiency. Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, and Luke Zettlemoyer. Scaling laws for gen- In International Conference on Machine Learning, pp. erative mixed-modal language models. 265–279. PMLR, 2023. ', 'after_paragraph_idx': 37, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Yuling Gu, Shengyi Huang, Matt Jordan, et al. 2 olmo 2 furious. arXiv preprint arXiv:2501.00656, 2024. ', 'modified_lines': '', 'original_lines': 'Rui Pan, Haishan Ye, and Tong Zhang. Eigencurve: Optimal learning rate schedule for sgd on quadratic objectives with skewed hessian spectrums. arXiv preprint arXiv:2110.14109, 2021. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Stefano Spigler, Mario Geiger, and Matthieu Wyart. Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm. Journal of Statistical Mechanics: Theory and Experiment, 2020(12):124001, 2020. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'on applications of computer vision (WACV), pp. 464–472. IEEE, 2017. ', 'modified_lines': '', 'original_lines': 'Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. Advances in neural information processing systems, 25, 2012. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_after': '∆η(i) = η(i−1) −η(i), is also measurable. Using this, we estimate the shape of LD(i)(t) for different stages. Regard Ti as TA in the two-stage case and define x := t − Ti. As shown in Figure 9(a), (Ti + x) := ˜B(i) (cid:16) ˜C (i) · η(i)x + 1 ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'In the multi-stage schedule, given stage index 1 ≤ i ≤ n, the Stage-Wise Loss Reduction. ', 'modified_lines': 'stage-wise loss reduction is defined as LD(i)(t) = LDTi(t)1. 
The LR reduction between stages, LD(i)(Ti + x) approximately conforms to a similar power law as (14) for the two-stage case: ', 'original_lines': 'stage-wise loss reduction is defined as LD(i)(t) = LDTi(t)2. The LR reduction between stages, LD(i)(Ti + x) approximately conforms to a similar power law as (16) for the two-stage case: ', 'after_paragraph_idx': 19, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': 'stage coincide. ', 'paragraph_idx': 17, 'before_section': None, 'context_before': 'B HOW MIGHT THE MULTI-POWER LAW ARISE? ', 'modified_lines': 'In this section, we present a preliminary theoretical analysis to understand how the Multi-Power Law might arise. More specifically, we consider a simple setting where SGD optimizes a quadratic loss function with noisy gradients, and show that the Multi-Power Law naturally emerges when the Hessian and noise covariance matrices exhibit certain types of power-law structures. While this analysis does not fully capture the complexity of deep learning, we believe it offers insight into how the Multi-Power Law relates to underlying spectral properties in the optimization landscape. 1Note that LD(i)(t) = LDt(i) (t) for each Ti + 1 ≤ t(i) ≤ Ti+1, as these auxiliary processes for a specific ', 'original_lines': 'In this section, we present a preliminary theoretical analysis to understand how the multi-power law might arise. More specifically, we consider a simple setting where SGD optimizes a quadratic loss function with certain gradient noise, and show that the multi-power law naturally emerges when the Hessian and noise covariance matrices exhibit power-law decay in their eigenvalues. While this analysis does not capture the full complexity of deep learning, we believe it sheds light on how the multi-power law is related to certain power-law structures in the optimization landscape. 2Note that LD(i)(t) = LDt(i) (t) for each Ti + 1 ≤ t(i) ≤ Ti+1, as these auxiliary processes for a specific ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Default Hyperparameter Value ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 4: The model series run in all the experiments. Hoffmann et al. (2022) utilizes the number of non-embedding parameters (#Non-embeddings) to count model sizes, while Kaplan et al. (2020) counts the total number of parameters (#Params). The unit of the Parameter is M in this table. ', 'modified_lines': '', 'original_lines': ' Our validation contains two steps: (1) fitting schedule-curve pairs from the training set and (2) pre- dicting the loss curves for schedules in the test set. The training set contains only a single 24,000-step constant and cosine schedule pair, alongside a 16,000-step two-stage schedule of ηB = 0.3ηA. The test set has one 72,000-step constant and cosine schedule, 24,000-step unseen WSD and WSDLD schedules, and 16,000-step two-stage schedules with ηB = 0.1ηA and ηB = 0.6ηA. The details are provided in Table 6. We train Llama2 (Touvron et al., 2023) models of 25M, 100M, and 400M, and 22 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 .', 'after_section': None, 'context_after': 'ηB = 0.6ηA. The peak learning rate is 3 × 10−4, and the ending learning rate is 3 × 10−5 for the co- sine, WSD, and WSDLD schedules. For all two-stage schedules, TA = 8000. 
All schedules include a warmup phase of 2,160 steps. Detailed descriptions of the training and test sets are summarized in Table 6. Similar to the two-stage fitting, we fit the parametric law using the Huber loss as the objective (Hu- ber, 1992): (26) min (cid:88) t 1 × 10−5 and 1 × 10−6, initialized with parameters from the first optimization. Each optimization λ as a tunable hyperparameter. The input Xt for MTL is the same as MPL’s input. Fol- lowing Tissue et al. (2024), we use L-BFGS to minimize Equation (26), grid-searching λ ∈ {0.95, 0.99, 0.995, 0.999, 0.9995} and selecting the best fit based on training accuracy. Predictions are evaluated across the test set (Table 6), with comparisons to MPL in Table 1 and Figure 14. In Figure 14, we compare them specifically over the WSDLD schedule. In the decay stage, MPL not only achieves higher fitting accuracy but also aligns with the curvature of the loss curve. In contrast, MTL fits the stable stage well but predicts a counterfactual concave curve during the decay stage. Chinchilla Data Scaling Law. The Chinchilla Data Scaling Law (CDSL) is similar to the one- Discussion on the Optimization Method. We also explored the use of the L-BFGS algorithm for fitting MPL but found it highly sensitive to parameter initialization. For instance, under certain ', 'paragraph_idx': 30, 'before_section': None, 'context_before': 'Our validation frames the Multi-Power Law (MPL) fitting as a machine learning task, training on schedule-loss curve pairs from the training set and predicting loss curves for the test set. The training ', 'modified_lines': 'set contains a 24000-step constant and cosine schedule pair, and a 16000-step two-stage schedule with ηB = 0.3ηA. The test set includes a 72000-step constant and cosine schedule, a 24000-step unseen WSD and WSDLD schedule, and 16000-step two-stage schedules with ηB = 0.1ηA and G.2 FITTING THE LAW Huberδ(log LΘ(Xt) − log Lgt(Xt)), Θ where Lgt(Xt) denotes the ground truth of validation loss, LΘ(Xt) is the predicted loss, δ is a hyperparameter for the Huber loss, and Θ denotes parameters to fit. The total fitting loss sums up the Huber loss over the validation steps. In practice, we compute the area under the linearly interpolated 25 0100020003000400050006000Step0.000.020.040.060.080.100.12Loss ReductionAverage Error: 2.75e-06loss reductionpredloss reductionpred510152025304060100A (x105)0100020003000400050006000Step0.000.020.040.060.080.10Loss ReductionAverage Error: 4.99e-06loss reductionpredloss reductionpred47121618222629B (x105) Published as a conference paper at ICLR 2025 Table 7: Parameter values for optimized schedules across different model sizes, rounded to two decimal places. Model Size A B 400M 100M 25M 0.66 0.59 0.51 614.30 521.40 446.40 C 0.16 0.24 2.07 α 0.42 0.46 0.53 β 0.88 0.60 0.41 γ 0.56 0.65 0.52 L0 2.52 2.79 3.17 polyline of the learning rate at validation steps as a surrogate for the LR sum. This approach reduces the computational cost since requiring only step numbers, learning rates, and losses at validation steps. Multi-Power Law. For the Multi-Power Law (MPL), Θ = {A, B, C, α, β, γ, L0}, and Xt = {η1, . . . , ηt}. We use the Adam optimizer to fit the MPL due to its flexibility, with a learning rate of 5 × 10−3 for the index parameters (α, β, and γ) and 5 × 10−2 for the coefficient or constant parameters (A, B, C, and L0). We also perform a second optimization with a learning rate of runs for 5×104 steps, selecting the lowest training loss result. Fitted parameters are listed in Table 7. 
In the discussion of Appendix C, we also fit simplified MPL or MPL variants in this manner, except for the momentum law (Appendix G.2). In Figure 13, we present the fitting and prediction results for a subset of experiments, with a zoom-in window highlighting predictions near the end of training. In long-horizon experiments, the zoomed-in view reveals slight discrepancies between the MPL predictions and the actual training curves, targeted for future refinement. Momentum Law. For the momentum law (MTL; Appendix C), Θ = {A, B, α, L0}, with power law mentioned in Appendix C, but uses the power of steps instead of the LR sum, with Θ = {A, α, L0}, and Xt = t (final steps only) for Equation (26). The fitting of CDSL follows Hoffmann et al. (2022) and uses the L-BFGS algorithm to minimize the Huber loss. With regard to sample efficiency (Figure 5(a)), CDSL uses cosine curves at 14960, 20080, 27760, 40560, 53360, and 72000 steps, requiring 4.8 times more compute than MPL (two 24000-step curves), with prediction errors of 0.007 (MPL) versus 0.024 (CDSL). MPL achieves less than one-third the prediction error of CDSL. In Figure 5(b), CDSL fits all intermediate steps, ignoring the effect of LR schedule and loss reductions for the comparison with MPL. ', 'original_lines': 'set contains a 24,000-step constant and cosine schedule pair, and a 16,000-step two-stage schedule with ηB = 0.3ηA. The test set includes a 72,000-step constant and cosine schedule, a 24,000-step unseen WSD and WSDLD schedule, and 16,000-step two-stage schedules with ηB = 0.1ηA and H.2 LAW FITTING Huberδ(log Lθ(Xt) − log Lgt(Xt)), θ where Lgt(Xt) denotes the ground truth of validation loss, and Lθ(Xt) is the predicted loss, and δ is a hyperparameter for the Huber loss. The total fitting loss sums up the Huber loss over the validation steps. In practice, we compute the area under the linearly interpolated polyline of the learning rate at validation steps as a surrogate for the LR sum. This approach reduces the computational cost since requiring only step numbers, learning rates, and losses at validation steps. Multi-Power Law. For the Multi-Power Law (MPL), θ = {A, B, C, α, β, γ, L0}, and Xt = {η1, . . . , ηt}. We use the Adam optimizer to fit the MPL due to its flexibility, with a learning rate of 5 × 10−3 for the index parameters (α, β, and γ) and 5 × 10−2 for the coefficient or con- stant parameters (A, B, C, and L0). We also perform a second optimization with a learning rate of runs for 5 × 104 steps, selecting the lowest training loss result. Fitted parameters are listed in Ta- ble 7. In the discussion of Appendix C, we also fit simplified MPL or MPL variants in this manner, except for the momentum law (Appendix H.2). In Figure 13, we present the fitting and prediction results for a subset of experiments, with a zoom-in window highlighting predictions near the end of training. In long-horizon experiments, the zoomed-in view reveals slight discrepancies between the MPL predictions and the actual training curves, targeted for future refinement. Momentum Law. For the momentum law (MTL; Appendix C), θ = {A, B, α, L0}, with 25 Published as a conference paper at ICLR 2025 Figure 13: Fitting and Prediction Details. Subfigures depict loss curve fitting (training set) and prediction (test set) across various configurations, labeled as (X, Y ) for row X, column Y . The columns in the accom- panying table indicate: F/P for Fitting (F) or Prediction (P), Model Size, Step Length, and Learning Rate Schedule. 
Subfigure details follow: (X, Y ) F/P Model Size Step Length LR Schedule (1, 1) (1, 2) (2, 1) (2, 2) (3, 1) (3, 2) F F P P P P 25M 400M 25M 400M 100M 100M 24,000 16,000 16,000 72,000 24,000 72,000 Cosine 2-stage (3 × 10−4 → 9 × 10−5) 2-stage (3 × 10−4 → 1.8 × 10−4) Cosine WSD Constant 26 7500100001250015000175002000022500Step3.303.353.403.453.503.55Loss200002100022000230003.303.323.34Learning RateLossMulti-powerOne-power0.51.01.52.02.5Learning Rate (x104)6000800010000120001400016000Step2.852.902.953.003.05Loss1300013500140001450015000155002.842.862.88Learning RateLossMulti-powerOne-power1.01.52.02.53.0Learning Rate (x104)7000800090001000011000120001300014000Step3.4003.4253.4503.4753.5003.5253.550Loss1150012000125001300013500140003.393.403.413.423.43Learning RateLossMulti-powerOne-power1.82.02.22.42.62.83.0Learning Rate (x104)10000200003000040000500006000070000Step2.62.72.82.93.0Loss60000625006500067500700002.602.622.642.662.68Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)7500100001250015000175002000022500Step2.953.003.053.103.153.203.25Loss200002100022000230002.942.962.983.003.023.04Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step2.953.003.053.103.153.203.25Loss60000625006500067500700002.9352.9402.9452.950Learning RateLossMulti-powerOne-power2.852.902.953.003.053.103.15Learning Rate (x104) Published as a conference paper at ICLR 2025 Table 7: Parameter values for optimized schedules across different model sizes, rounded to two decimal places. Model Size A B 400M 100M 25M 0.66 0.59 0.51 614.30 521.40 446.40 C 0.16 0.24 2.07 α 0.42 0.46 0.53 β 0.88 0.60 0.41 γ 0.56 0.65 0.52 L0 2.52 2.79 3.17 (a) Random Seed Ablation (b) Comparison with Momentum Law Figure 14: (a) Experiments with a 25M model over 24,000 steps across different seeds, showing a final loss standard deviation of 0.0007 and a maximum gap of 0.0014. (b) Comparison between Multi-Power Law (MPL) and Momentum Law (MTL). In the decay stage, MPL achieves higher fitting accuracy and matches the curvature of the loss curve, whereas MTL fits the stable stage but predicts a counterfactual concave curve during the decay stage. power law mentioned in Appendix C, but uses the power of steps instead of the LR sum, with θ = {A, α, L0}, and Xt = t (final steps only) for Equation (26). The fitting of CDSL follows Hoffmann et al. (2022) and uses the L-BFGS algorithm to minimize the Huber loss. With regard to sample efficiency (Figure 5(a)), CDSL uses cosine curves at 14,960, 20,080, 27,760, 40,560, 53,360, and 72,000 steps, requiring 4.8 times more compute than MPL (two 24,000-step curves), with prediction errors of 0.007 (MPL) versus 0.024 (CDSL). MPL achieves less than one-third the prediction error of CDSL. In Figure 5(b), CDSL fits all intermediate steps, ignoring the effect of LR schedule and loss reductions for the comparison with MPL. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Lemma 1. For a function f (x) = x − M (1 − λx) with M > 0 and 0 < λ < 1, we have the following properties: ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 't = 0. ', 'modified_lines': 'For convenience, we first prove the following lemma. ', 'original_lines': 'For the convenience, we first prove the following lemma. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 −', 'after_section': None, 'context_after': 'dx (0). 
dx (0) ≥ 0, df The above discussion completes the proof. Next, we prove Theorem 2. Proof for Theorem 2. First, we reparameterize ηt as ηt = η0 − (cid:80)t problem (A) becomes ', 'paragraph_idx': 27, 'before_section': None, 'context_before': 'd2f dx2 (x) = M λx(log λ)2 > 0. ', 'modified_lines': 'Therefore, f (x) is strictly convex. Then we can discuss the property of f (x) over x ∈ (0, ∞) by discussing df (1) When df dx (0) ≥ 0 for all x′ ∈ (0, x). Thus, f (x) is monotonically increasing over (0, x) and f (x) > f (0) = 0. So x = 0 is the unique minimizer over x ∈ [0, ∞) and f (x) ≥ f (y) when x ≥ y ≥ 0. dx (x′) > df (2) When df df dx (0) < 0, limx→∞ dx (x) = 1. Then there exists x∗ ∈ (0, ∞) such that df dx (x∗) = 0. Thus, f (x) monotonically decreases over (0, x∗) and monotonically increases over (x∗, ∞). Hence x∗ is the unique minimizer over x ∈ [0, ∞). Moreover, f (x∗) < f (0) = 0 and limx→∞ f (x) = ∞, so there exists ˜x ∈ (x∗, ∞), such that f (x) < 0 over (0, ˜x) and f (x) > 0 over (˜x, ∞). Clearly, f (x) monotonically increases over x ∈ [˜x, ∞). Therefore, if f (y) ≥ 0 for some y ∈ [0, ∞), then y ≥ ˜x and f (x) ≥ f (y) for all x ∈ [y, ∞). 35 Published as a conference paper at ICLR 2025 ', 'original_lines': 'Then we can discuss the property of f (x) over x ∈ (0, ∞) by discussing df 1. When df f (0) = 0. dx (x) > df dx (0) ≥ 0. Thus, f (x) is monotonically increasing and f (x) > 2. When df dx (0) < 0, then there exists x∗ ∈ (0, ∞) such that df dx (x∗) = 0. Thus, f (x) mono- tonically decreases over (0, x∗) and monotonically increases over (x∗, ∞). Hence x∗ is the only minimal over (0, ∞). Moreover, f (x∗) < f (0) = 0 and limx→∞ f (x) = ∞, so there exists ˜x ∈ (x∗, ∞), such that f (x) < 0 over (0, ˜x) and f (x) > 0 over (˜x, ∞). Clearly, f (x) monotonically increases over x ∈ [˜x, ∞). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '• Complementary Slackness: λt∆t = 0 for all t = 1, . . . , T and µ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:80)T i=1 ∆t ≤ η0, respectively. By Karush-Kuhn-Tucker (KKT) conditions, there exist λ1, . . . , λT ≥ 0 and µ ≥ 0 such that the following conditions hold: ', 'modified_lines': '', 'original_lines': ' 33 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'd (cid:88) ', 'paragraph_idx': 10, 'before_section': '1 INTRODUCTION', 'context_before': '(ηk−1 − ηk) ', 'modified_lines': 'k=1 ', 'original_lines': 'k=2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 10}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': '|E[L(θT )] − M (θ0, E)| ≤ 5ηmax i S1 exp(−2λiS1)θ2 λ3 max d ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'where Sk := (cid:80)T ', 'modified_lines': 'τ =k ητ . The estimation error is bounded as 0,i + d (cid:88) where ηmax := max0≤k≤T ηk. i=1 15 2 η2 ', 'original_lines': 'τ =k ητ , and the estimation error is bounded as 0,i + 5 exp(2)η2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Then we can rewrite ¯U11(θk−1) as ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'kλ2 i ). ', 'modified_lines': '', 'original_lines': ' 36 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-15 11:51:20
ICLR.cc/2025/Conference
tMqFytuokK
khimCcyszn
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'schedule bears some resemblance to the recently proposed Warmup-Stable-Decay (WSD) schedule (Hu et al., 2024) but achieves a slightly lower final loss. We be- lieve these results could offer valuable insights for understanding the dynamics of ', 'modified_lines': 'pretraining and designing learning rate schedules to improve efficiency.1 ', 'original_lines': 'pretraining and designing learning rate schedules to improve efficiency. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'but in the long term, it may cause overshooting and oscillation along sharp directions on the loss †Corresponding authors. ', 'modified_lines': '1Code Implementation: https://github.com/thu-yao-01-luo/MultiPowerLaw ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Existing scaling laws sidestep the complexity of LR schedules by fitting parameters on a fixed fam- ily of LR schedules. For instance, Hoffmann et al. (2022) fitted the parameters in the Chinchilla ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'then gradually reducing it over time, following a Learning Rate schedule (LR schedule) (Bengio, 2012). These LR schedules sometimes include a warmup phase at the beginning, where the LR linearly increases from zero to a large value over a few thousand steps, and only after this warmup ', 'modified_lines': 'phase does the LR start to decay. The most commonly used LR schedule in LLM pretraining is the cosine schedule (Loshchilov & Hutter, 2017), which decays the LR following a cosine curve. Other schedules include the cyclic (Smith, 2017), Noam (Vaswani et al., 2017), and Warmup-Stable-Decay (WSD) schedules (Hu et al., 2024), but there is no consensus on the optimal choice. ', 'original_lines': 'phase does the LR start to decay. The most commonly used LR schedule in LLM pretraining is the cosine schedule (Loshchilov & Hutter, 2016), which decays the LR following a cosine curve. Other schedules include the cyclic (Smith, 2017), Noam (Vaswani, 2017), and Warmup-Stable- Decay (WSD) schedules (Hu et al., 2024), but there is no consensus on the optimal choice. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'ηt = 1+α T ). Here, ηmax is the peak LR and α is usually set to 0.1. The Warmup-Stable-Decay (WSD) schedule (Hu et al., 2024) is a recently proposed LR schedule. This ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'Learning Rate Schedule. A learning rate (LR) schedule is a sequence E := {η1, . . . , ηT } that specifies the LR at each step of the training process. 
For language model pretraining, the cosine ', 'modified_lines': 'LR schedule (Loshchilov & Hutter, 2017) is the most popular schedule, which can be expressed as ', 'original_lines': 'LR schedule (Loshchilov & Hutter, 2016) is the most popular schedule, which can be expressed as ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '6 CONCLUSIONS ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'the cosine schedules, showing practical gains from loss improvements. Ablation details for longer horizons and larger models are in Appendix H. ', 'modified_lines': 'Optimized LR Schedule Outperforms Tuned WSD Variants. Our optimized schedules lead to smaller final loss than the WSD and WSDLD schedules proposed in Hu et al. (2024). For a 400M model, we find that the decay step of a 24000-step optimized schedule (Figure 1) is close to the optimally tuned step (∼6000) for these WSD schedules, determined via grid search over {3000, 4000, 5000, 6000, 7000}. However, even when the decay ratios of WSD and WSDLD schedules are optimally tuned, our optimized schedule still outperforms them. Further, we analyze key differences between our optimized schedule and these two WSD schedules as follows. The optimized schedule decays to below 1/20 of the peak LR, even approaching to zero, while WSD schedules decay linearly or exponentially to 1/10 of the peak LR. However, simply adjusting the ending LR to near-zero (Appendix H) does not close the gap. Another key difference is the decay function: we find through symbolic regression that in the decay phase, the optimized schedule roughly follows a power decay function rather than a linear or exponential decay: ηt ≈ ηmax · (1 − τ )1.5, where τ is the step number in the decay phase, normalized to [0, 1]. Motivated by this, we propose a WSD variant with sqrt-cube decay (WSDSC), which decays the LR exactly as ηt = ηmax · (1 − τ )1.5. WSDSC is effective across various model sizes and architectures and outperforms the WSD schedule, as shown in Figures 6(b) and 15(a), though it still falls behind our optimized schedule. See Appendix H for details. ', 'original_lines': 'Optimized LR Schedule Outperforms Tuned WSD Variants. Our optimized schedules lead to smaller final loss than the WSD and WSDLD schedules proposed in Hu et al. (2024). For a 400M model, we find that the decay step of a 24000-step optimized schedule (Figure 1) is close to the optimally tuned step (∼6000) for these WSD schedules, determined via grid search over {3000, 4000, 5000, 6000, 7000}. However, even when the decay ratios of WSD and WSDLD schedules are optimally tuned, our optimized schedule still outperforms them. Further, we analyze key differences between our optimized schedule and these two WSD schedules as follows. The optimized schedule decays to below 1/20 of the peak LR, even approaching to zero, while WSD schedules decay linearly or exponentially to 1/10 of the peak LR. However, simply adjusting the ending LR to near-zero (Appendix H) does not close the gap. Another key difference is the decay function: through symbolic regression, we find that in the decay phase, the optimized schedule roughly follows a power decay function rather than linear or exponential decay: ηt ≈ ηmax · (1 − τ )1.5, where τ is the step number in the decay phase, normalized to [0, 1]. Motivated by this, we propose a WSD variant with sqrt-cube decay (WSDSC), which decays the LR exactly as ηt = ηmax · (1 − τ )1.5. 
WSDSC is effective across various model sizes and architectures and outperforms the WSD schedule, as evidenced in Figures 6(b) and 15(a), though it still falls behind our optimized schedule. See Appendix H for details. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'context_after': '∆η(i) = η(i−1) −η(i), is also measurable. Using this, we estimate the shape of LD(i)(t) for different stages. Regard Ti as TA in the two-stage case and define x := t − Ti. As shown in Figure 9(a), LD(i)(Ti + x) approximately conforms to a similar power law as (14) for the two-stage case: ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'In the multi-stage schedule, given stage index 1 ≤ i ≤ n, the Stage-Wise Loss Reduction. ', 'modified_lines': 'stage-wise loss reduction is defined as LD(i)(t) = LDTi(t)2. The LR reduction between stages, ', 'original_lines': 'stage-wise loss reduction is defined as LD(i)(t) = LDTi(t)1. The LR reduction between stages, ', 'after_paragraph_idx': 19, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Model Architectures. We validate MPL’s generalizability across GPT-2 (Radford et al., 2019) and OLMo (Groeneveld et al., 2024) models to evaluate the generalizability of the MPL across model architectures. For GPT-2, we go through the simplified experiments, fitting the MPL on cosine and ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'peak learning rates, batch sizes, and random seeds, incorporating both self-conducted and open- source experimental results. ', 'modified_lines': '', 'original_lines': '26 Published as a conference paper at ICLR 2025 Figure 13: Fitting and Prediction Details. Subfigures depict loss curve fitting (training set) and prediction (test set) across various configurations, labeled as (X, Y ) for row X, column Y . The columns in the accom- panying table indicate: F/P for Fitting (F) or Prediction (P), Model Size, Step Length, and Learning Rate Schedule. 
Subfigure details follow: (X, Y ) F/P Model Size Step Length LR Schedule (1, 1) (1, 2) (2, 1) (2, 2) (3, 1) (3, 2) F F P P P P 25M 400M 25M 400M 100M 100M 24000 16000 16000 72000 24000 72000 Cosine 2-stage (3 × 10−4 → 9 × 10−5) 2-stage (3 × 10−4 → 1.8 × 10−4) Cosine WSD Constant 27 7500100001250015000175002000022500Step3.303.353.403.453.503.55Loss200002100022000230003.303.323.34Learning RateLossMulti-powerOne-power0.51.01.52.02.5Learning Rate (x104)6000800010000120001400016000Step2.852.902.953.003.05Loss1300013500140001450015000155002.842.862.88Learning RateLossMulti-powerOne-power1.01.52.02.53.0Learning Rate (x104)7000800090001000011000120001300014000Step3.4003.4253.4503.4753.5003.5253.550Loss1150012000125001300013500140003.393.403.413.423.43Learning RateLossMulti-powerOne-power1.82.02.22.42.62.83.0Learning Rate (x104)10000200003000040000500006000070000Step2.62.72.82.93.0Loss60000625006500067500700002.602.622.642.662.68Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)7500100001250015000175002000022500Step2.953.003.053.103.153.203.25Loss200002100022000230002.942.962.983.003.023.04Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step2.953.003.053.103.153.203.25Loss60000625006500067500700002.9352.9402.9452.950Learning RateLossMulti-powerOne-power2.852.902.953.003.053.103.15Learning Rate (x104) Published as a conference paper at ICLR 2025 (a) Random Seed Ablation (b) Comparison with Momentum Law Figure 14: (a) Experiments with a 25M model over 24000 steps across different seeds, showing a final loss standard deviation of 0.0007 and a maximum gap of 0.0014. (b) Comparison between Multi-Power Law (MPL) and Momentum Law (MTL). In the decay stage, MPL achieves higher fitting accuracy and matches the curva- ture of the loss curve, whereas MTL fits the stable stage but predicts a counterfactual concave curve during the decay stage. (a) Loss Curve Comparison for GPT-2 (b) Long-Horizon Prediction for GPT-2 Figure 15: Loss curves of GPT-2 models with Multi-Power Law fitted over 24000-step constant and cosine (a) Comparison between the cosine, WSD, and WSDSC schedules (see Section 5.2); (b) schedule losses. Prediction for a 72000-step cosine schedule loss curve. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'high accuracy, with R2 values exceeding 0.9970 across all cases, as illustrated in Figure 17. These experiments indicate that, while the coefficients of MPL are batch-size dependent, the functional form of MPL remains robust across varying batch size configurations ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'results are shown in Figure 16. Batch Size We extend experiments to batch sizes of 64 and 256 sequences on 25M models, com- plementing the prior 128-sequence (0.5M) results, with sequence length of 4096. 
MPL maintains ', 'modified_lines': '', 'original_lines': ' 28 510152025Step(×103)3.43.63.84.0Loss22.022.523.023.524.03.31003.31253.31503.31753.3200Seed 45018Seed 337Seed 1660500010000150002000025000Step0.51.01.52.02.53.0Learning Rate (x104)WSDLD3.03.23.43.63.8Loss1800020000220002.953.003.053.10LossMulti-powerOne-powerMomentum500010000150002000025000Step0.00.51.01.52.02.53.0Learning Rate (x104)CosineWSDWSDSCLossLR3.253.503.754.004.254.50Loss1800019000200002100022000230003.253.3010000200003000040000500006000070000Step3.23.33.43.53.6Loss60000625006500067500700003.143.163.183.203.22Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104) Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Next, we prove Theorem 2. Published as a conference paper at ICLR 2025 ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'over (˜x, ∞). Clearly, f (x) monotonically increases over x ∈ [˜x, ∞). Therefore, if f (y) ≥ 0 for some y ∈ [0, ∞), then y ≥ ˜x and f (x) ≥ f (y) for all x ∈ [y, ∞). ', 'modified_lines': 'This completes the proof. 36 ', 'original_lines': 'The above discussion completes the proof. 35 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-16 09:37:47
ICLR.cc/2025/Conference
khimCcyszn
iXlq1YeGWo
[]
2025-03-19 09:02:10
ICLR.cc/2025/Conference
iXlq1YeGWo
fzmItFqM6V
[]
2025-05-09 08:12:02
ICLR.cc/2025/Conference
fzmItFqM6V
Ggs1wbsWAK
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'schedules and their corresponding loss curves in the training set to fit the law. The fitted MPL can then predict loss curves for test schedules with different shapes and extended horizons. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.0100 0.0070 ', 'modified_lines': '', 'original_lines': '(a) Fitting Sample Efficiency Comparison (b) Whole Curve Fitting Comparison Figure 5: (a) Target loss predictions at 128000-step for a cosine schedule using MPL and CDSL fitting, with a 400M model. CDSL fitting requires six cosine losses (Loss Curve(C)) from 14,960 steps to 72000 steps but relies solely on their final losses (Loss Ends(C)). In contrast, MPL leverages the entire 24000-step constant and cosine loss curves (Loss Curves(M)). Final loss predictions are denoted as Pred(C) for CDSL and Pred(M) for MPL respectively. (b) Comparison of MPL and CDSL fittings on the whole loss curve of the open-source 7B OLMo model, trained with a linear schedule. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'distinct advantages: (1) MPL incorporates LR dependency, unlike CDSLs, and (2) MPL predicts the entire loss curve, whereas CDSLs are limited to estimating only the final loss. These advantages enable MPL to achieve higher sample efficiency than CDSLs. Notably, we demonstrate that a single ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Comparison with Chinchilla Law. While Chinchilla-style data scaling laws, which we abbreviate as CDSLs, are widely adopted (Muennighoff et al., 2023; Hoffmann et al., 2022), MPL offers several ', 'modified_lines': '', 'original_lines': ' 8 024681012Step (x104)2.62.83.03.2Loss12.5012.7513.002.5502.575Loss Curves(C)Loss Curves(M)Loss Ends(C)Loss Ends(M)Pred(C)Pred(M)Loss Curve(Test)Target Loss012345Step (x105)2.02.12.22.32.42.5LossLossMulti-powerChinchilla0.51.01.52.02.53.0Learning Rate (x104)Linear Schedule Published as a conference paper at ICLR 2025 Table 2: Downstream performance comparison for the cosine and our optimized schedules. Percent- age changes (↑ or ↓) indicate relative improvements or regressions compared to the cosine schedule. Schedule LAMBADA HellaSwag PIQA ARC-E C3 RTE Cosine Optimized 46.54 48.71 (↑ 2.17%) 37.12 37.74 (↑ 0.62%) 65.13 65.07 (↓ 0.06%) 43.56 44.09 (↑ 0.53%) 48.44 50.30 (↑ 1.86%) 52.71 53.79 (↑ 1.08%) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'after_section': None, 'context_after': '6 CONCLUSIONS ', 'paragraph_idx': 38, 'before_section': '5 THE MULTI-POWER LAW INDUCES BETTER LR SCHEDULES', 'context_before': 'in the decay phase, normalized to [0, 1]. Motivated by this, we propose a WSD variant with sqrt-cube decay (WSDSC), which decays the LR exactly as ηt = ηmax · (1 − τ )1.5. WSDSC is effective across various model sizes and architectures and outperforms the WSD schedule, as shown in Figures 6(b) ', 'modified_lines': 'and 13(a), though it still falls behind our optimized schedule. See Appendix H for details. ', 'original_lines': 'and 15(a), though it still falls behind our optimized schedule. See Appendix H for details. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 38}, {'section': '2 ηmax cos( πt', 'after_section': None, 'context_after': 'due to several simplifications in our derivation: (1) the coefficient β remains constant across different LR scales; (2) the intermediate loss reduction does not depend on the LR prefix; (3) variations in LR during the warm-up phase are ignored. ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'training runs compared to existing scaling laws. Furthermore, our MPL is accurate enough to be used for optimizing schedules, and we extensively validate the superiority of our optimized schedules over commonly used ones. However, we do observe slight deviations between our predictions and actual ', 'modified_lines': 'training curves, especially for long-horizon and high peak LR cases like in Figures 15 and 16. likely ', 'original_lines': 'training curves, especially for long-horizon and high peak LR cases like in Figures 13 and 16. likely ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'optimal neural scaling laws. Processing Systems, 2024. ', 'modified_lines': '', 'original_lines': 'Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EMPIRICAL DERIVATION OF THE MULTI-POWER LAW', 'after_section': None, 'context_after': '(b) Loss reduction at B: LD(TA + xB) = LA(tA) − LB(tB). ', 'paragraph_idx': 26, 'before_section': None, 'context_before': 'xB = 3000, ηB = 9 × 10−5, ηA = 3 × 10−4, TA = 8000. (a) A and B have the equal LR sums: xA = 900, tA = 8900. (c) Fitting loss reduction ', 'modified_lines': '(cid:99)LD(TA + xB) with power form results in 0.13(1 − (1 + 0.21x)−0.15); Fitting with exponential form results in 0.0790(1 − e−0.01x). The shape of loss reduction is closer to a power form than exponential. ', 'original_lines': '(cid:99)LD(TA + xB) with power form results in 0.13(1 − (1 + 0.21x)0.15); Fitting with exponential form results in 0.0790(1 − e−0.01x). The shape of loss reduction is closer to a power form than exponential. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Assumption 1. For all 1 ≤ i ≤ d, the marginal distribution of (λi, Σii, ∆i) is a fixed distribution p(λ, E, ∆) with following properties: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i (θ0 − θ∗)). We consider a scenario where Hessian, noise covariance, and initial point are drawn from certain distributions before training, and make the following assumptions: ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Discussion on the Optimization Method. We also explored the use of the L-BFGS algorithm for fitting MPL but found it highly sensitive to parameter initialization. For instance, under certain initializations, the fitted parameters may include a high β value and a near-zero C. 
Note that 1 − ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'of CDSL. In Figure 5(b), CDSL fits all intermediate steps, ignoring the effect of LR schedule and loss reductions for the comparison with MPL. ', 'modified_lines': '', 'original_lines': '27 Published as a conference paper at ICLR 2025 Figure 13: Fitting and Prediction Details. Subfigures depict loss curve fitting (training set) and prediction (test set) across various configurations, labeled as (X, Y ) for row X, column Y . The columns in the accom- panying table indicate: F/P for Fitting (F) or Prediction (P), Model Size, Step Length, and Learning Rate Schedule. Subfigure details follow: (X, Y ) F/P Model Size Step Length LR Schedule (1, 1) (1, 2) (2, 1) (2, 2) (3, 1) (3, 2) F F P P P P 25M 400M 25M 400M 100M 100M 24000 16000 16000 72000 24000 72000 Cosine 2-stage (3 × 10−4 → 9 × 10−5) 2-stage (3 × 10−4 → 1.8 × 10−4) Cosine WSD Constant 28 7500100001250015000175002000022500Step3.303.353.403.453.503.55Loss200002100022000230003.303.323.34Learning RateLossMulti-powerOne-power0.51.01.52.02.5Learning Rate (x104)6000800010000120001400016000Step2.852.902.953.003.05Loss1300013500140001450015000155002.842.862.88Learning RateLossMulti-powerOne-power1.01.52.02.53.0Learning Rate (x104)7000800090001000011000120001300014000Step3.4003.4253.4503.4753.5003.5253.550Loss1150012000125001300013500140003.393.403.413.423.43Learning RateLossMulti-powerOne-power1.82.02.22.42.62.83.0Learning Rate (x104)10000200003000040000500006000070000Step2.62.72.82.93.0Loss60000625006500067500700002.602.622.642.662.68Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)7500100001250015000175002000022500Step2.953.003.053.103.153.203.25Loss200002100022000230002.942.962.983.003.023.04Learning RateLossMulti-powerOne-power0.51.01.52.02.53.0Learning Rate (x104)10000200003000040000500006000070000Step2.953.003.053.103.153.203.25Loss60000625006500067500700002.9352.9402.9452.950Learning RateLossMulti-powerOne-power2.852.902.953.003.053.103.15Learning Rate (x104) Published as a conference paper at ICLR 2025 (a) Random Seed Ablation (b) Comparison with Momentum Law Figure 14: (a) Experiments with a 25M model over 24000 steps across different seeds, showing a final loss standard deviation of 0.0007 and a maximum gap of 0.0014. (b) Comparison between Multi-Power Law (MPL) and Momentum Law (MTL). In the decay stage, MPL achieves higher fitting accuracy and matches the curva- ture of the loss curve, whereas MTL fits the stable stage but predicts a counterfactual concave curve during the decay stage. (a) Loss Curve Comparison for GPT-2 (b) Long-Horizon Prediction for GPT-2 Figure 15: Loss curves of GPT-2 models with Multi-Power Law fitted over 24000-step constant and cosine (a) Comparison between the cosine, WSD, and WSDSC schedules (see Section 5.2); (b) schedule losses. Prediction for a 72000-step cosine schedule loss curve. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'and cosine schedules for GPT. (cid:16) Ttotal−t Ttotal−Tstable Published as a conference paper at ICLR 2025 ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '. Validation experiments on 1B Llama2 and GPT models confirm its efficacy: Figure 6(b) shows that WSDSC outperforms the cosine schedule for the 1B model, though it falls short of the MPL- ', 'modified_lines': 'optimized schedule. 
Figure 13(a) demonstrates WSDSC’s superiority over both the standard WSD 34 ', 'original_lines': 'optimized schedule. Figure 15(a) demonstrates WSDSC’s superiority over both the standard WSD 35 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-05-09 09:05:10
ICLR.cc/2025/Conference
uU3wDlmwJU
Fl4L3qj2cR
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION (LLMs) (OpenAI, 2022; 2023) with the ability to understand visual information, exhibiting a com- mon trend toward low-latency responses to enable real-time multimodal interactions. Recently, the most widely adopted LMMs (Liu et al., 2023b; 2024a; Zhu et al., 2024), exemplified by the LLaVA ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'LLaVA-Mini, an efficient LMM with minimal vision tokens. To achieve a high compression ratio of vision tokens while preserving visual information, we first analyze how LMMs understand vision tokens and find that most vision tokens ', 'modified_lines': 'only play a crucial role in the early layers of LLM backbone, where they mainly fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in ad- vance, thereby facilitating the extreme compression of vision tokens fed to LLM backbone into one token. LLaVA-Mini is a unified large multimodal model that can support the understanding of images, high-resolution images, and videos in an efficient manner. Experiments across 11 image-based and 7 video-based bench- marks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by 77%, deliver low-latency responses within 40 milliseconds, and pro- cess over 10,000 frames of video on the GPU hardware with 24GB of memory.1 Large multimodal models (LMMs), such as GPT-4o (OpenAI, 2024), equip large language models ', 'original_lines': 'only play a crucial role in the early layers, where they fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in advance, thereby facilitating the ex- treme compression of vision tokens fed to LLM backbone into one token. LLaVA- Mini can support the understanding of images, high-resolution images, and videos in an efficient manner. Experiments across 11 image-based and 7 video-based benchmarks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini can re- duce FLOPs by 77%, deliver low-latency responses within 40 milliseconds, and process over 10,000 frames of video on GPU hardware with 24GB of memory.1 Large multimodal models (LMMs), such as GPT-4o (OpenAI, 2024a), equip large language models ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The computational demands of LMMs are primarily driven by model scale and the number of tokens in the input context. Existing approaches to improving LMM efficiency typically focus on model downsizing (Chu et al., 2023; 2024; Yuan et al., 2024a; Zhou et al., 2024a) or quantization techniques (Yuan et al., 2024b), but often overlook another critical avenue: reducing the number of vision tokens to shorten the input context. Some token reduction methods rely on predefined rules to reduce the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'corporate a large number of additional vision tokens into the LLM’s context to represent visual information (Liu et al., 2023b), significantly increasing computational complexity. 
For instance, in the widely used vision encoder CLIP ViT-L/336px, a single image is encoded into 24 × 24 = 576 ', 'modified_lines': 'vision tokens (Radford et al., 2021), where integrating such a large number of vision tokens into ∗Corresponding author: Yang Feng. 1Code: https://github.com/ictnlp/LLaVA-Mini; Model: https://huggingface.co/ ICTNLP/llava-mini-llama-3.1-8b 1 Published as a conference paper at ICLR 2025 the context of parameter-heavy LLM results in significant computational overhead and higher infer- ence latency. This issue becomes even more pronounced in high-resolution image modeling (which requires more vision tokens per image) (Liu et al., 2024b) or video processing (which involves pro- cessing more images) (Maaz et al., 2024; Lin et al., 2023a). Therefore, developing efficient LLMs is essential for achieving GPT-4o-like low-latency multimodal interactions. ', 'original_lines': 'vision tokens (Radford et al., 2021), where integrating such a large number of vision tokens into the context of parameter-heavy LLM results in significant computational overhead and higher inference latency. This issue becomes even more pronounced in high-resolution image modeling (which re- quires more vision tokens per image) (Liu et al., 2024b; Gen Luo, 2024) or video processing (which involves processing more images) (Maaz et al., 2024; Lin et al., 2023a). Therefore, developing efficient LLMs is essential for achieving GPT-4o-like low-latency multimodal interactions. 1Code is provided in supplementary materials. The model will be released after being de-anonymized. 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 110100576#Vision Tokens/Image5254565860626466MMBench Accuracy (%)FLOPs: 77.1% reductionSpeed: 2.92× fasterVRAM/Image: 360MB 0.6MBLatency: 113ms 40msLLaVA-v1.5LLaVA-MiniImage/Video LMMsImage LMMsLLaVA-v1.5LLaVA-v1.5 (token avg)LLaVA-PruMergeVoCo-LLaMALLaVA-TokenPackerMQT-LLaVAVideo-LLaVALLaMA-VIDLLaVA-Mini Another efficiency determinant for LMMs is the context length provided to the LLM backbone, For video-based LMMs, Video-ChatGPT (Maaz et al., 2024), VideoChat (Li et al., 2024c), Video- LLaVA (Lin et al., 2023a), and Video-LLaMA (Zhang et al., 2023), select a fixed number of frames from videos of varying lengths. MovieChat (Song et al., 2024a) applies memory techniques to Previous methods have primarily focused on token reduction on the vision encoder. LLaVA-Mini takes this a step further by exploring how vision tokens and text tokens interact within the LLM ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'As Large multimodal models (LMMs) are increasingly deployed in real-time applications (OpenAI, 2024), enhancing their efficiency has become a critical concern. Recent efforts focus on either Published as a conference paper at ICLR 2025 reducing the model size or the number of tokens that fed into LMM. 
To reduce LMM’s model size, previous methods directly replace the LLM backbone with a smaller one (Chu et al., 2023; 2024; Yuan et al., 2024a; Zhou et al., 2024a), while directly reducing the parameter scale can impact the LLM backbone’s capabilities, resulting in performance declines in visual tasks (Shang et al., 2024). including vision and text tokens. In practice, the number of vision tokens can be substantial, partic- ularly when processing high-resolution images and videos. For image-based LMMs, token merging (Bolya et al., 2023), PruMerge (Shang et al., 2024), and TokenPacker (Li et al., 2024e) aggregate vision tokens based on similarity. Qwen-VL (Bai et al., 2023) and MQT-LLaVA (Hu et al., 2024) utilize Q-former (Li et al., 2023a) to compress vision tokens into a fixed length. However, directly reducing vision tokens inevitably results in the loss of visual information (Fan et al., 2024). condense videos into a fixed-length representation. Such frame selection or merging methods may lose some key frames or misunderstand the temporal information of the video (Zhou et al., 2024b). ', 'original_lines': 'Large multimodal models (LMMs) (OpenAI, 2024b; Liu et al., 2023b; 2024a; Zhu et al., 2024) use vision encoder to convert the image into vision tokens, which are then processed by large language models (LLMs) (OpenAI, 2023; Chiang et al., 2023; Touvron et al., 2023a;b) to facilitate visual understanding. As LMMs are increasingly deployed in real-time applications (OpenAI, 2024a), enhancing their efficiency has become a critical concern. Recent efforts focus on either reducing the model size or the number of tokens that fed into LMM. To reduce LMM’s model size, previous methods directly replace the LLM backbone with a smaller one (Chu et al., 2023; 2024; Yuan et al., 2024a; Zhou et al., 2024a), thereby lowering the total parameters. Quantization techniques (Yuan et al., 2024b) can also be applied to improve LMM 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 efficiency. However, reducing the scale and precision of parameters can impact the LLM backbone’s capabilities, resulting in performance declines in visual tasks (Shang et al., 2024). which includes both vision and text tokens. In practice, the number of vision tokens can be sub- stantial, particularly when processing high-resolution images and videos. For image-based LMMs, token merging (Bolya et al., 2023), PruMerge (Shang et al., 2024), and TokenPacker (Li et al., 2024e) aggregate vision tokens based on similarity, achieving compression rates of 20% to 50%. Qwen-VL (Bai et al., 2023) and MQT-LLaVA (Hu et al., 2024) utilize Q-former (Li et al., 2023a) or resampler to compress vision tokens into a fixed length. However, these methods directly reduce vision tokens, and inevitably result in the loss of visual information (Fan et al., 2024). condense videos into a fixed-length representation. VideoLLM-online (Chen et al., 2024) process long video with extracting11 1 token per frame. Such frame selection or merging methods may lose some key frames or misunderstand the temporal information of the video (Zhou et al., 2024b). 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 ARCHITECTURE', 'after_section': None, 'context_after': '5 Vision EncoderProjection………CompressionModality Pre-fusion…Large Language ModelImageHigh-ResolutionImageVideoLanguage InstructionVisual Inputs<Instruction>:<Response>:fast inference with one vision token (40 milliseconds, 0.6MB/image)576 tokens / image 1 token / imageLanguage ResponseEmbeddingWhat are shown in this image/video?(a) Compression(b) Modality Pre-fusion………Transformer BlocksCross-Attentionattention 576 tokens learnablecompression queriesposition encodingvisiontext tokens Hq are concatenated and fed into the pre-fusion module, and the outputs corresponding to the text tokens are then extracted as fusion tokens, expressed as: ', 'paragraph_idx': 27, 'before_section': '4.1 ARCHITECTURE', 'context_before': 'visual information from all vision tokens in advance. Based on our previous observations, where this fusion stage occurs implicitly within the early layers of the LLM, the modality pre-fusion module f (·) consists of Nf usion Transformer blocks (Vaswani et al., 2017), where each Transformer block ', 'modified_lines': 'shares the same structure and hyperparameters with LLM backbone. Vision tokens Hv and text Published as a conference paper at ICLR 2025 ', 'original_lines': 'share the same structure and hyperparameters with LLM backbone. Vision tokens Hv and text Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': 'Abstract', 'after_section': None, 'context_after': '61.0 73.7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '68.8 65.4 – ', 'modified_lines': '', 'original_lines': ' 50.8 58.8 61.6 – – 55.5 57.0 59.6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the Association for Computational Linguistics: Human Language Technologies, pp. 190–200, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL https: //aclanthology.org/P11-1020. ', 'modified_lines': '', 'original_lines': ' Joya Chen, Zhaoyang Lv, Shiwei Wu, Kevin Qinghong Lin, Chenan Song, Difei Gao, Jia-Wei Liu, Ziteng Gao, Dongxing Mao, and Mike Zheng Shou. Videollm-online: Online video large lan- guage model for streaming video. In CVPR, 2024. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'benchmark for multimodal large language models, 2024. URL https://arxiv.org/abs/ 2306.13394. ', 'modified_lines': '', 'original_lines': 'Yuxin Zhang Xiawu Zheng Xiaoshuai Sun Rongrong Ji Gen Luo, Yiyi Zhou. 
Feast your eyes: Mixture-of-resolution adaptation for multimodal large language models. arXiv preprint arXiv:2403.03003, 2024. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 4.0', 'after_section': None, 'context_after': 'for the comparison with SOTA token merging methods, including PruMerge (Shang et al., 2024), PruMerge++ (Shang et al., 2024), and MQT-LLaVA (Hu et al., 2024). Specifically, PruMerge ap- plies the widely-used token merge (ToMe) technique (Bolya et al., 2023) on ViT, PruMerge++ im- proves upon PruMerge by uniformly sampling additional vision tokens, and MQT-LLaVA employs Matryoshka representation learning to compress vision tokens. and MQT-LLaVA at the same compression rate, showing the advantages of query-based compres- sion. Methods ', 'paragraph_idx': 60, 'before_section': None, 'context_before': '62.1 64.9 ', 'modified_lines': 'token compression performance, we remove the modality pre-fusion module from LLaVA-Mini As shown in Table 10, LLaVA-Mini’s compression module outperforms PruMerge, PruMerge++, E.2 EFFECT OF MODALITY PRE-FUSION Table 11: Performance of LLaVA-Mini when using only pre-fusion module without compression. ', 'original_lines': 'To verify the effectiveness of the compression module, we compared the compression module in LLaVA-Mini with previous advanced token merging methods. To ensure a fair comparison of to- ken compression performance, we have removed the modality pre-fusion module from LLaVA-Mini As shown in the Table 11, LLaVA-Mini’s compression module outperforms PruMerge, PruMerge++, E.3 EFFECT OF MODALITY PRE-FUSION Table 12: Performance of LLaVA-Mini when using only pre-fusion module without compression. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'efficiency improvements brought by LLaVA-Mini are scalable across these hardware platforms. LLaVA-Mini significantly reduces the computational load of LMMs by decreasing the number of vision tokens. To further study the proportion of computational load contributed by each component F VISUALIZATION OF COMPRESSION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'The efficiency improvements brought by LLaVA-Mini stem from reduced computational load (FLOPs), which is consistent across different hardware platforms. To demonstrate the scalabil- ity of model efficiency across different hardware platforms, we compute the inference latency of ', 'modified_lines': 'LLaVA-Mini on three hardware platforms: RTX 3090, A100, and A800. As shown in Table 13, the E.5 COMPUTATIONAL OVERHEAD OF EACH COMPONENT Table 14: Computational overhead (FLOPs) of each component in LLaVA-Mini. Methods Res. LLaVA-v1.5 LLaVA-Mini LLaVA-v1.5 LLaVA-Mini 336 336 672 672 Vision Encoder FLOPs (T) Projection Compression 0.349 0.349 1.745 1.745 0.024 0.024 0.121 0.121 - 0.001 - 0.009 Pre-fusion LLM Total - 0.125 - 1.183 8.177 1.460 38.623 4.131 8.55 1.96 40.49 7.19 in LLaVA-Mini, we compute the FLOPs of each module, as shown in Table 14. The proposed com- pression module and pre-fusion module incur minimal computational cost, while the computation required by the LLM backbone is significantly reduced. ', 'original_lines': 'LLaVA-Mini on three hardware platforms: RTX 3090, A100, and A800. As shown in Table ??, the E.6 COMPUTATIONAL OVERHEAD OF EACH COMPONENT in LLaVA-Mini, we compute the FLOPs of each module, as shown in the Table 15. 
The proposed compression module and pre-fusion module incur minimal computational cost, while the computa- tion required by the LLM backbone is significantly reduced. 22 Under review as a conference paper at ICLR 2025 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 Figure 12: Visualization of the cross-attention in the compression module introduced in LLaVA- Mini. The left side is the original image, and the right side is the cross-attention distribution heat map, where brighter areas are more heavily weighted during compression. The example images are all from the LLaVA-Bench-in-the-Wild benchmark. 23 Compression (cross-attention) in LLaVA-Mini(a)(b)(c)(d)(f)(g)(h)(i)(l)(m)(k)(j)(e) Under review as a conference paper at ICLR 2025 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 Table 15: Computational overhead (FLOPs) of each component in LLaVA-Mini. Methods Res. LLaVA-v1.5 LLaVA-Mini LLaVA-v1.5 LLaVA-Mini 336 336 672 672 Vision Encoder FLOPs(T) Projection Compression 0.349 0.349 1.745 1.745 0.024 0.024 0.121 0.121 - 0.001 - 0.009 Pre-fusion LLM Total - 0.125 - 1.183 8.177 1.460 38.623 4.131 8.55 1.96 40.49 7.19 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-01 12:49:45
ICLR.cc/2025/Conference
Fl4L3qj2cR
DFfBn9bxRV
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'results in performance degradation (Wang et al., 2024; Fan et al., 2024). In this paper, we aim to develop efficient LMMs ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': '(Yuan et al., 2024b), but often overlook another critical avenue: reducing the number of vision tokens to shorten the input context. Some token reduction methods rely on predefined rules to reduce the number of tokens output by the vision encoder (Bolya et al., 2023; Shang et al., 2024; Li et al., ', 'modified_lines': '2024e; Ye et al., 2024d; Hu et al., 2024), which leads to the loss of visual information and inevitably ', 'original_lines': '2024e; Ye et al., 2024c; Hu et al., 2024), which leads to the loss of visual information and inevitably ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'sizes and training datasets. Appendix A gives the formal expression of the preliminary analyses. Vision Tokens are More Important in Early Layers To find out which layers in LMM the vision ', 'modified_lines': 'tokens play a more important role, we measure the attention weights assigned to different token types ', 'original_lines': 'tokens play a more important role, we measure the attention weights assigned to different token ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 PRELIMINARY ANALYSES', 'after_section': '3.2 PRELIMINARY ANALYSES', 'context_after': 'Most Vision Tokens are Focused in Early Layers To further assess the importance of individual visual tokens, we calculate the entropy of the attention distribution at each layer. As shown in ', 'paragraph_idx': 18, 'before_section': None, 'context_before': 'Figure 3: Attention entropy assigned to different types of tokens across different layers in LMMs. ', 'modified_lines': '(including instruction, vision, and response) at each layer. As shown in Figure 2, Visual tokens receive more attention in the earlier layers, but this attention sharply decreases in the deeper layers, with over 80% of the attention being directed towards instruction tokens. This finding is consistent with the previous conclusion(Chen et al., 2024). This change in attention suggests that vision tokens play a central role in the early layers, with the instruction tokens seeking relevant visual information from vision tokens through attention mechanisms. In the later layers, the model relies more on instructions that have already fused the visual data to generate responses. ', 'original_lines': 'types (including instruction, vision, and response) at each layer. As shown in Figure 2, the attention assigned to vision tokens varies significantly across layers. Visual tokens receive more attention in the earlier layers, but this attention sharply decreases in the deeper layers, with over 80% of the attention being directed towards instruction tokens. This change in attention suggests that vision tokens play a central role in the early layers, with the instruction tokens seeking relevant visual information from vision tokens through attention mechanisms. In the later layers, the model relies more on instructions that have already fused the visual data to generate responses. 
', 'after_paragraph_idx': 19, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'To further substantiate our finding that visual tokens are particularly critical in the early layers, we evaluated the visual understanding abil- ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'are crucial in the early layers, and reducing their quantity inevitably results in a loss of visual information. This explains why previous methods of direct token reduction will compromise visual understand- ', 'modified_lines': 'ing capabilities (Shang et al., 2024; Ye et al., 2024d; Hu et al., 2024). ', 'original_lines': 'ing capabilities (Shang et al., 2024; Ye et al., 2024c; Hu et al., 2024). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 ARCHITECTURE', 'after_section': '4.1 ARCHITECTURE', 'context_after': 'ˆHv = A · Hv, where A = Softmax ', 'paragraph_idx': 26, 'before_section': '4.1 ARCHITECTURE', 'context_before': 'ber of vision tokens fed into the LLM backbone by utilizing a query-based compression module. To learn compression of the vision tokens, LLaVA-Mini introduces C × C learnable compression queries Qv. These queries interact with all vision tokens Hv through cross-attention (Li et al., ', 'modified_lines': '2023a; Ye et al., 2024a), selectively extracting the important visual information to produce C × C compressed vision tokens ˆHv ∈ RC2×dh. To preserve the spatial information in the image dur- ing compression, we introduce a 2D sinusoidal positional encoding P E(·) (He et al., 2021) on the learnable queries and original vision tokens. Formally, the compression can be expressed as: ', 'original_lines': '2023a), selectively extracting the important visual information to produce C × C compressed vision tokens ˆHv ∈ RC2×dh. To preserve the spatial information in the image during compression, we introduce a 2D sinusoidal positional encoding P E(·) (He et al., 2021) on the learnable queries and original vision tokens. Formally, the compression can be expressed as: ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 26}, {'section': 'Abstract', 'after_section': None, 'context_after': '65.0 66.8 67.6 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '– – ', 'modified_lines': '', 'original_lines': '50.8 58.8 61.6 – – 55.5 57.0 59.6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 4.0', 'after_section': '1 4.0', 'context_after': 'sulting in a significant loss of visual information and negatively impacting visual understanding of LMMs. For instance, LLaMA-VID, VoCo-LLaVA, and MQT-LLaVA, which reduce vision tokens to 1-2 tokens, lead to 5% performance drop on average. In contrast, LLaVA-Mini employs modal- ', 'paragraph_idx': 60, 'before_section': '1 4.0', 'context_before': 'reported in Table 1, where LLaVA-Mini achieves performance comparable to LLaVA-v1.5 while using only 1 vision token instead of 576. Previous efficient LMMs with fewer vision tokens often ', 'modified_lines': 'merged similar tokens directly after the vision encoder (Shang et al., 2024; Ye et al., 2024d), re- ', 'original_lines': 'merged similar tokens directly after the vision encoder (Shang et al., 2024; Ye et al., 2024c), re- ', 'after_paragraph_idx': 60, 'before_paragraph_idx': 60}, {'section': '3.2 PRELIMINARY ANALYSES', 'after_section': None, 'context_after': 'over 40% (Shang et al., 2024). 
Notably, under the same FLOPs, increasing the number of pre-fusion layers yields greater benefits than increasing the number of compression vision tokens. This sup- ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'fects of modality pre-fusion, we conduct an ablation study in Table 6. Without pre-fusion, token compression results in a performance drop of around 5%, even with 144 vision tokens retained, the performance of LMMs falls short of LLaVA-v1.5. This also explains why previous token merging ', 'modified_lines': 'methods often exhibit poor performance (Ye et al., 2024d) or can only achieve a compression rate of ', 'original_lines': 'methods often exhibit poor performance (Ye et al., 2024c) or can only achieve a compression rate of ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Ruyang Liu, Chen Li, Yixiao Ge, Ying Shan, Thomas H Li, and Ge Li. One for all: Video conver- sation is feasible without video instruction tuning. arXiv preprint arXiv:2309.15785, 2023c. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Llava-next: Improved reasoning, ocr, and world knowledge, January 2024b. URL https:// llava-vl.github.io/blog/2024-01-30-llava-next/. ', 'modified_lines': '', 'original_lines': '14 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Ar- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Zhende Song, Chenchen Wang, Jiamu Sheng, Chi Zhang, Gang Yu, Jiayuan Fan, and Tao Chen. Moviellm: Enhancing long video understanding with ai-generated movies, 2024b. ', 'modified_lines': '', 'original_lines': ' 15 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EXPERIMENTAL SETTING', 'after_section': None, 'context_after': 'improving computational efficiency. TokenPacker (Li et al., 2024e) is a visual projector that efficiently reduces visual tokens by 80% ', 'paragraph_idx': 42, 'before_section': None, 'context_before': 'respectively, with a total of two tokens representing each image, thus facilitating the understanding of longer videos. ', 'modified_lines': 'VoCo-LLaMA (Ye et al., 2024d) compresses all vision tokens using language models, significantly ', 'original_lines': 'VoCo-LLaMA (Ye et al., 2024c) compresses all vision tokens using language models, significantly ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 15:47:30
ICLR.cc/2025/Conference
DFfBn9bxRV
pyEB9MvAlY
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'sizes and training datasets. Appendix A gives the formal expression of the preliminary analyses. Vision Tokens are More Important in Early Layers To find out which layers in LMM the vision ', 'modified_lines': 'tokens play a more important role, we measure the attention weights assigned to different token ', 'original_lines': 'tokens play a more important role, we measure the attention weights assigned to different token types ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 PRELIMINARY ANALYSES', 'after_section': '3.2 PRELIMINARY ANALYSES', 'context_after': 'Most Vision Tokens are Focused in Early Layers To further assess the importance of individual visual tokens, we calculate the entropy of the attention distribution at each layer. As shown in ', 'paragraph_idx': 18, 'before_section': None, 'context_before': 'Figure 3: Attention entropy assigned to different types of tokens across different layers in LMMs. ', 'modified_lines': 'types (including instruction, vision, and response) at each layer. As shown in Figure 2, Visual tokens receive more attention in the earlier layers, but this attention sharply decreases in the deeper layers, with over 80% of the attention being directed towards instruction tokens. This finding is consistent with the previous conclusion (Chen et al., 2024). This change in attention suggests that vision tokens play a central role in the early layers, with the instruction tokens seeking relevant visual information from vision tokens through attention mechanisms. In the later layers, the model relies more on instructions that have already fused the visual data to generate responses. ', 'original_lines': '(including instruction, vision, and response) at each layer. As shown in Figure 2, Visual tokens receive more attention in the earlier layers, but this attention sharply decreases in the deeper layers, with over 80% of the attention being directed towards instruction tokens. This finding is consistent with the previous conclusion(Chen et al., 2024). This change in attention suggests that vision tokens play a central role in the early layers, with the instruction tokens seeking relevant visual information from vision tokens through attention mechanisms. In the later layers, the model relies more on instructions that have already fused the visual data to generate responses. ', 'after_paragraph_idx': 19, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '61.0 73.7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '68.8 65.4 – ', 'modified_lines': '', 'original_lines': ' 50.8 58.8 61.6 – – 55.5 57.0 59.6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 15:55:28
ICLR.cc/2025/Conference
dtec2eqLss
8uyXsyi9Pq
[{'section': '2.1 MOTIVATION AND OVERALL FRAMEWORK', 'after_section': None, 'context_after': '7 Under review as a conference paper at ICLR 2025 First, one reason for aggregating dimensions at all is to reduce the computational complexity. Using individual dimensions for masking unit can be computationally expensive because self-attention in ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '4.1 AGGREGATING SIMILAR DIMENSIONS FOR MASKING UNIT ', 'modified_lines': 'The clustering is the heart of the proposed method. For masked autoencoder, the unit we used for masking and predicting can be anything, e.g. individual dimension or patches formed by randomly selected dimensions. Why should similar dimensions be aggregated to form a masking unit? We provide both intuition and ablation experiments to answer this question. ', 'original_lines': 'The clustering is the heart of the proposed method. For masked autoencoder, the unit we used for masking and predicting can be anything, e.g. each individual dimension or patches formed by randomly selected dimensions. Why should similar dimensions be aggregated to form a masking unit? We provide both intuition and ablation experiments to answer this questions. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-25 22:40:29
ICLR.cc/2025/Conference
8uyXsyi9Pq
0AQqNQcfRC
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'we test the proposed method on the synthesized biological visual dataset, derived from the CIFAR- 10 [38] using a foveated retinal sampling mechanism [12]. Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [64] ', 'modified_lines': 'and the TCGA miRNA-based cancer classification dataset [69; 73]. Across all these benchmarks, ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'our proposed method outperforms existing SSL methods, demonstrating its effectiveness in building unsupervised representations for signals lacking explicit stationarity or topology. Given the emer- gence of new modalities in deep learning from natural sciences [66; 28; 55; 40? ? ? ], such as ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'clustering and self-organization before unsupervised learning using a masked autoencoder for signal reconstruction. ', 'modified_lines': '', 'original_lines': 'and the TCGA miRNA-based cancer classification dataset [69; 73]. Across all these benchmarks, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Given a dataset S ∈ Rn×m, where each row denote a high dimensional data point. Let Si ∈ Rn×1 be ith column of S, which represents the ith dimension of the signal. Since we want to process dimensions that share similar information together, we use spectral clustering to group these ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'are detailed in the following subsections. 2.2 DENSITY ADJUSTED SPECTRAL CLUSTERING ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The optimization problem becomes the following: min ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '214 215 ', 'modified_lines': '', 'original_lines': 'and shape of the clusters strongly affect the unsupervised learning performance. To adjust the size and shape, we apply a density adjustment matrix P to adjust L in the spectral embedding objective. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Transforming a high-dimensional signal into a sequence of clusters using the above method is not enough because it does not capture the internal structure within individual clusters. As an intuitive example, given an image, we divide it into a set of image patches of the same size. If we apply dif- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ablation study in section 4.3 and appendix A.4 2.3 SELF-ORGANIZING LAYER ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'After the self-organizing layer, z0 is passed to a Transformer-based masked autoencoder (MAE) with an unsupervised learning objective. Masked autoencoder (MAE) consists of an encoder and a decoder which both consist of stacked Transformer blocks introduced in Vaswani et al. [71]. The ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'introduced in the next subsection. 
2.4 MASKED AUTOENCODER ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 SYNTHETIC BIOLOGICAL VISION DATASETAs discussed in the introduction, the biological visual signal serves as an ideal data modality to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, thebiological visual signal is acquired through irregularly positioned ganglion cells, inherently lackingexplicit topology and stationarity. However, it is hard to collect real-world biological vision signals', 'after_section': None, 'context_after': 'Experiments. We compare URLOST on both of the synthetic vision datasets as well as the original CIFAR-10 with popular unsupervised representation learning methods SimCLR [10] and MAE [31]. We also compared the ViT backbone used in URLOST to other backbones such as convolutional neural network (CNN). Moreover, since CNN works better with stationary signals, we further com- pared our methods to SimCLR with a graph neural network (GNN) backbone, which doesn’t rely on the stationarity assumption of the data. For the GNN baseline, we use Vision GNN (ViG), which is state of the art graph neural network on images proposed in [29]. After the model is trained without ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'spectral clustering results are shown. Each unique color represents a cluster, with each kernel col- ored according to its assigned cluster. ', 'modified_lines': 'Foveated CIFAR-10. Unlike cameras that use uniform photosensors with consistent sampling pat- terns, the human retina features ganglion cells whose receptive field sizes vary—smaller in the cen- tral fovea and larger in the periphery. This results in foveated imaging [70], enabling primates to have both high-resolution central vision and a wide overall receptive field. However, this distinctive sampling pattern causes visual signals in the retina to lack stationarity; the statistical properties differ between the center and periphery, with ganglion cell responses being highly correlated in the fovea but less correlated in the periphery. To mimic the foveated imaging with CIFAR-10, we adopt the retina sampling mechanism from Cheung et al. [12]. Like shown in Figure 3, each dot represents a retinal ganglion cell, which together form a non-stationary sampling lattice with irregular topology. Details on the implementation of foveation sampling is provided in Appendix A.3. ', 'original_lines': 'Foveated CIFAR-10. Much like photosensors installed in a camera, many layers of sensors in human retina sample and process visual stimuli. In general, retinal ganglion cells are considered as the last stage of processing layer before passing the information to the visual cortex. Unlike photosensors that have uniform receptive fields and adhere to a consistent sampling pattern, retinal ganglion cells at different locations of the retina vary in their receptive field size: smaller in the center (fovea) but larger in the peripheral of the retina. This distinctive retina sampling pattern results in foveated imaging [70]. It gives primates the ability to have both a high-resolution vision and a broad overall receptive field. However, visual signals sampled by the retina lack stationarity. Responses of two ganglion cells separated by the same displacement are highly correlated in the retina but less correlated in the peripheral. 
To mimic the foveated imaging with CIFAR-10, we adopt the retina sampling mechanism from Cheung et al. [12]. Specifically, each retinal ganglion cell is simplified and modeled using a Gaussian kernel. The dot product between pixel values and the Gaussian kernel determines the response of each cell. Figure 3 illustrates the sampling kernel locations. Applying this sampling grid and permuting the resulting pixels produces the foveated CIFAR-10. In the natural retina, retinal ganglion cell density decreases linearly with eccentricity, which makes fovea much denser than the peripheral, compared to the simulated lattice in Figure 3. However, considering the low resolution of the CIFAR-10 dataset, we reduce the simulated fovea’s density to prevent redundant sampling. For this dataset, we define the prior density q(i) as the distance from the ith sampling kernel to the center of the sampling lattice. 5 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 Under review as a conference paper at ICLR 2025 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Table 1: Evaluation on computer vision and synthetic biological vision dataset. ViT (Patch) stands for the Vision Transformer backbone with image patches as inputs. ViT (Pixel) means pixels are treated as input units. ViT (Clusters) means clusters are treated as inputs instead of patches. The number of clusters is set to 64 for both Permuted CIFAR-10 and Foveated CIFAR-10 dataset. Eval Acc stands for linear probing evaluation accuracy. Dataset CIFAR-10 Permuted CIFAR-10 (no topology) Foveated CIFAR-10 (no topology or stationarity) Method MAE MAE SimCLR SimCLR URLOST MAE MAE MAE SimCLR SimCLR URLOST MAE MAE MAE SimCLR SimCLR Backbone ViT (Patch) ViT (Pixel) ResNet-18 ViG ViT (Cluster) ViT (Random Patch) ViT (Pixel) ResNet-18 ViG ViT (Cluster) ViT (Random Patch) ViT (Pixel) ResNet-18 ViG Eval Acc 88.3 % 56.7 % 90.7 % 53.8 % 86.4 % 55.7 % 56.7 % 47.9 % 40.0 % 85.4 % 51.1 % 48.5 % 38.0 % 42.8 % Table 2: Evaluation on V1 response decoding and TCGA pan-cancer classification tasks. “Raw” indicates preprocessed (standardized and normalized) raw signals. Best β values are used for β-VAE. For URLOST MAE, cluster sizes are 200 (V1) and 32 (TCGA). We pick 15 seeds ran- domly to repeat the training and evaluation for each method. For the MAE baseline, we randomly group dimensions instead of using spectral clusterings. We report the 95% confidence interval for all methods. Method Raw MAE β-VAE URLOST MAE V1 Response Decoding Acc TCGA Classification Acc 73.9% ± 0.00 % 70.6% ± 0.22 % 75.64% ± 0.11% 78.75% ± 0.18 % 91.7 ± 0.24% 90.6% ± 0.63% 94.15% ± 0.24% 94.90% ± 0.25 % ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 MOTIVATION AND OVERALL FRAMEWORK', 'after_section': None, 'context_after': '7 Under review as a conference paper at ICLR 2025 4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '4.1 AGGREGATING SIMILAR DIMENSIONS FOR MASKING UNIT ', 'modified_lines': 'The clustering is the heart of the proposed method. Why should similar dimensions be aggregated to form a masking unit? 
We provide both intuition and ablation experiments to answer this question. First, aggregating dimensions create a nontrivial unsupervised learning task for the model. Making individual dimension will lead to trivial solutions. To fill in the missing value of a dimension, the model tends to learn to use low-level information such as the averaged value of similar dimensions. But why aggregating similar dimensions instead of random dimensions? We assume similar dimen- sions are used to sampling similar region of the underlying signals. Clustering of similar dimensions contains underlying patterns. Model learn these patterns to form a good representation through the mask-prediction task. For example, for the CIFAR10 example, similar pixels tend to sample seman- tically similar regions, which together form image patches. For V1 data, each neurons code simple pattern like oriented edge but neurons together code patterns like shape and contours. Predicting at the shape level is a non-trival SSL task. We also provide experiments for using different masking units, as shown in table 3, aggregating similar dimensions significantly improves the performance. 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 ', 'original_lines': 'The clustering is the heart of the proposed method. For masked autoencoder, the unit we used for masking and predicting can be anything, e.g. individual dimension or patches formed by randomly selected dimensions. Why should similar dimensions be aggregated to form a masking unit? We provide both intuition and ablation experiments to answer this question. First, one reason for aggregating dimensions at all is to reduce the computational complexity. Using individual dimensions for masking unit can be computationally expensive because self-attention in transformer scales with the number of masking units. Moreover, another reason, more importantly, is to create a nontrivial unsupervised learning task for the model. If we don’t aggregate dimensions at all, the task becomes predicting random missing dimensions. This will lead to trivial solutions. To fill in the missing value of a dimension, the model tends to learn to use the averaged value of similar dimensions. In other words, the model learns to solve the task using only low-level information. Why aggregating similar dimensions instead of aggregating random dimensions? We assume similar dimensions are used to sampling similar region of the underlying signals. Clustering of similar dimensions contains underlying patterns. In order to predict the masked clusters from unmasked clusters, the model needs to learn the underlying patterns from each cluster. For example, for the CIFAR10 example, each dimension of the signal represents a pixel. Similar pixels tend to sample semantically similar regions in the image due to its locality, which together form image patches. For V1 data, similar neurons likely share a similar receptive field. Each v1 neuron codes a simple pattern like oriented edge in the receptive field. Similar neurons together could code patterns like shape and contours. Masking and predicting at the shape and contours level is a non-trival SSL task. 
We also provide experiments for using different masking units, individual dimension, patch formed with random dimension, and cluster used in URLOST, as shown in table 3, aggregating similar dimensions for masking unit significantly improves the performance. Dataset Masking Unit Performance Permuted Cifar10 Foveated Cifar10 TCGA Gene V1 Response Clusters (URLOST) Random patch Individual dimension Clusters (URLOST) Random patch MAE (pixel) Clusters (URLOST) Random patch Individual dimension Clusters (URLOST) Random patch Individual dimension 86.4% 55.7% 56.7% 85.4% 51.1% 48.5% 94.9% 91.7% 88.3% 78.8% 73.9% 64.8% Table 3: Performance of different datasets using different masking units. “Clusters (URLOST)” refers to the cluster formed using pairwise mutual information and spectral clustering. “Random patch” refers to patch formed by aggregating random dimensions. “Individual dimension” referes to we use individual dimension as masking unit, without any aggregation. The model is trained for the mask prediction task, and the probing accuracy is reported in the table. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'the self-organizing layers both quantitatively and qualitatively. To facilitate the ablation, we further synthesized another dataset. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'considered as a projection layer that is shared among all elements in the input sequence. The self- organizing layer g(·, w(i)) introduced in Section 2.3 can be considered as a non-shared projection layer. We conducted an ablation study comparing the two designs to demonstrate the effectiveness of ', 'modified_lines': '', 'original_lines': ' 8 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'after_section': None, 'context_after': '4.3 DENSITY ADJUSTED CLUSTERING VS UNIFORM DENSITY CLUSTERING ', 'paragraph_idx': 37, 'before_section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'context_before': 'learning method to recover topology and enforce stationary on the signal. Note that Figure 4 shows that the self-organizing layer learns to “undo” the permutation. Additionally, the self-organizing layer also does some extra regular transformation to each patch. It is likely that the self-organizing ', 'modified_lines': 'layer learns to encode position information as transformations of some simple group structures. ', 'original_lines': 'layer learns to encode position information as transformations (group action) of some simple group structures. Figure 4: Learnt weights of a self-organizing layer. (A) Image is cropped into patches, where each patch x(i) first undergoes a different permutation E(i), then the inverse permutation E(i)T . (B) The learned weight of the linear self-organizing layer. The 12th column of W (i) at all positions i are reshaped into patches and visualized. When W (i) undergoes the inverse permutation E(i)T , they show similar patterns. (C) Visualization of the 37th column of W (i). Similar to (B). 
9 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 Learning with signal on non-euclidean geometry. In recent years, researchers from the machine learning community have made efforts to consider geometries and special structures beyond classic ', 'paragraph_idx': 45, 'before_section': '5 ADDITIONAL RELATED WORKS', 'context_before': 'Topology in biological visual signal. 2-D topology of natural images is strong prior that requires many bits to encode [17; 2]. Such 2-D topology is encoded in the natural image statistic [62; 34]. ', 'modified_lines': 'Kohonen in 1982 first come up with self-orgnaizing map (SOM). The algorithm produces a low- dimensional representation of a higher-dimensional dataset while preserving the topological struc- ture of the data [37]. It is also motivated to solve the “unscramble” pixels problem by “descram- bling” by mapping pixels into a 2D index set by leveraging the mutual information between pixels at different locations. More detail in Appendix A.12. [59] tackles the same problem with manifold learning. The community tried to feed the “recovered topology” to a graph neural network (GNN) [5], but suffered from inherent scalability issues on using GNN to do unsupervised learning. Optic and neural circuits in the retina result in a more irregular 2-D topology than the natural image, which can still be simulated [58; 52; 53; 51; 68; 36]. This information is further processed by the primary visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of visual input from 9 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 retina is encoded by the neuron in primary visual cortex [46; 24; 33; 23; 72; 54]. These study provide evidence that topology under retinal ganglion cell and V1 neurons can be recovered. Evidence of self-organizing mechanism in the brain. In neuroscience, many works use the self- organizing maps (SOM) as a computational model for V1 functional organization [20; 67; 1; 21; 45; 37]. In other words, this idea of self-organizing is a principle governing how the brain performs com- putations. Even though V1 functional organizations are present at birth, numerous studies indicate that the brain’s self-organizing mechanisms continue after full development [27; 60; 35]. ', 'original_lines': 'Kohonen in 1982 presents a fascinating thought experiment [37]: “Suppose you woke up one day to find someone rewired your optic nerve (or you have been implanted with a prosthetic retina). The signals from the retina to the brain are intact, but the wires are all mixed up, projecting to the wrong places. Can the brain learn to “descramble” the image?” Kohonen proposes the “self-organizing map” (SOM) algorithm to address this problem. 
SOM is an unsupervised machine learning tech- nique used to produce a low-dimensional (typically two-dimensional) representation of a higher- dimensional dataset while preserving the topological structure of the data. The parameters of SOM include a set of neurons, where each neuron has a location index. Let Wv denote the weight vector of the vth neuron and D the input vector. θ(u, v, s) is the interaction function that determines the influence between the uth and vth neurons based on their distance. The update equation for SOM is given by: Wv(s + 1) = Wv(s) + θ(u, v, s) · α(s) · (D(t) − Wv(s)) In the task of “descrambling” pixels, the input consists of pixel values. The indices will be two- dimensional, such as s = (i, j), and Wv(s) will learn to represent a set of “descrambled” pixels, where s represents the correct indices of the pixels. In other words, the index set defines the correct topology of the image as a 2D grid, and SOM maps the pixels onto this 2D topology by leveraging the mutual information between pixels at different locations. Other methods such as Rouxet al. [59] 10 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 use manifold learning to uncovered topology 2-d topology from natural images. However, when the intrinsic dimension of the manifold is larger than 2d, it is difficult to integrate the ”uncovered” topology with state-of-the-art self-supervised learning algorithms. The community tried to feed the “recovered topology” to a graph neural network (GNN) [5], but suffered from inherent scalability issues on using GNN to do unsupervised learning. Optic and neural circuits in the retina result in a more irregular 2-D topology than the natural image, which can still be simulated [58; 52; 53; 51; 68; 36]. This information is further processed by the primary visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of visual input from retina is encoded by the neuron in primary visual cortex [46; 24; 33; 23; 72; 54]. These evidences suggest we can recover the topology using signal from retinal ganglion cell and V1 neurons. Evidence of self-organizing mechanism in the brain. In computational neuroscience, many works use the self-organizing maps (SOM) as a computational model for V1 functional organi- zation [20; 67; 1; 21; 45; 37]. In other words, this idea of self-organizing is likely a principle gov- erning how the brain performs computations. Even though V1 functional organizations are present at birth, numerous studies also indicate that the brain’s self-organizing mechanisms continue after full development [27; 60; 35]. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 45}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': '5 ADDITIONAL RELATED WORKS', 'context_after': 'Self-supervised learning. Self-supervised learning (SSL) has made substantial progress in recent years. Different SSL method is designed for each modality, for example: predicting the masked/next ', 'paragraph_idx': 47, 'before_section': '5 ADDITIONAL RELATED WORKS', 'context_before': 'development in [29] shows this direction is prominent. They successfully scales GNN to ImageNet. We compared their proposed neural network architecture with the ViT backbone used in URLOST. Recent research also explores adapting the Transformer to domains beyond Euclidean spaces such ', 'modified_lines': 'as [14; 13; 26]. 
Ma et al. [39] treats an image as a set of points but relies on 2D coordinates. URLOST employs a single mutual information graph to define the topology of the high-dimensional signal. Gao et al. [26], on the other hand, is designed to handle graph data, where each data point corresponds to a distinct graph. It segments a graph into ”subgraphs,” processes them with a GNN, and then passes the output to a transformer. This approach is undeniably more flexible but requires all subgraphs to be globally aligned. Furthermore, the self-organizing layer in URLOST generalizes the ”patch resizer” mechanism from FlexiViT used in Beyer et al. [3] . ', 'original_lines': 'as [14; 13; 26]. Ma et al. [39] treats an image as a set of points but relies on 2D coordinates. In contrast, URLOST employs a single mutual information graph to define the topology of the high- dimensional signal. Gao et al. [26], on the other hand, is designed to handle graph data, where each data point corresponds to a distinct graph. It segments a graph into ”subgraphs,” processes them with a GNN, and then passes the output to a transformer. This approach is undeniably more flexible. However, since the GNN requires all subgraphs to be globally aligned, scalability may be hindered. A promising direction could involve combining Patch VIT and URLOST by replacing the self-organizing layer with a GNN. Furthermore, the self-organizing layer used in Beyer et al. [3] generalizes the ”patch resizer” mechanism from FlexiViT. FlexiViT is specifically designed to enable ViT to handle sequences with varying patch sizes. In FlexiViT, the parameters of the ”patch resizer” are determined via a local objective, whereas the parameters of the self-organizing layer are optimized jointly with all other network parameters for an end-to-end objective. ', 'after_paragraph_idx': 48, 'before_paragraph_idx': 47}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': None, 'context_after': '6 DISCUSSION The success of most current state-of-the-art self-supervised representation learning methods relies on the assumption that the data has known stationarity and domain topology, such as the grid-like RGB images and time sequences. However, biological vision systems have evolved to deal with signals ', 'paragraph_idx': 48, 'before_section': '5 ADDITIONAL RELATED WORKS', 'context_before': 'image pairs in computer vision [42; 31; 75; 10; 30; 77]. These SSL methods have demonstrated descent scalability with a vast amount of unlabeled data and have shown their power by achieving performance on par with or even surpassing supervised methods. They have also exhibited huge ', 'modified_lines': 'potential in cross-modal learning, such as the CLIP by Radford et al. [57]. ', 'original_lines': 'potential in cross-modal learning, such as the CLIP by Radford et al. [57]. However, we argue that these SSL methods are all built upon specific modalities with explicit topology and stationarity which URLOST goes beyond. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 48}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2025 The density function p(x) on the manifold is analogous to the density adjustment matrix in equa- tion 1. Standard approaches in equation 5 assume that nodes are uniformly distributed on the man- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '808 809 ', 'modified_lines': '', 'original_lines': '[46] Haluk Ogmen and Michael H Herzog. 
The geometry of visual perception: Retinotopic and nonretinotopic representations in the human visual system. Proceedings of the IEEE, 98(3): 479–492, 2010. [47] Bruno A Olshausen and David J Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. [48] Bruno A Olshausen and David J Field. How close are we to understanding v1? Neural computation, 17(8):1665–1699, 2005. [49] Bruno A Olshausen and David J Field. What is the other 85 percent of v1 doing. L. van Hemmen, & T. Sejnowski (Eds.), 23:182–211, 2006. [50] Marius Pachitariu, Carsen Stringer, Sylvia Schr¨oder, Mario Dipoppa, L Federico Rossi, Matteo Carandini, and Kenneth D Harris. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. BioRxiv, pp. 061507, 2016. [51] Eli Peli, Jian Yang, and Robert B Goldstein. Image invariance with changes in size: The role of peripheral contrast thresholds. JOSA A, 8(11):1762–1774, 1991. [52] Jeffrey S Perry and Wilson S Geisler. Gaze-contingent real-time simulation of arbitrary visual fields. In Human vision and electronic imaging VII, volume 4662, pp. 57–69. SPIE, 2002. [53] JS Pointer and RF Hess. The contrast sensitivity gradient across the human visual field: With emphasis on the low spatial frequency range. Vision research, 29(9):1133–1151, 1989. [54] Jonathan R Polimeni, Bruce Fischl, Douglas N Greve, and Lawrence L Wald. Laminar analysis of 7 t bold using an imposed spatial activation pattern in human v1. Neuroimage, 52(4):1334– 1346, 2010. [55] Daniel Probst and Jean-Louis Reymond. Visualization of very large high-dimensional data sets as minimum spanning trees. Journal of Cheminformatics, 12(1):12, 2020. doi: 10.1186/ s13321-020-0416-x. [56] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018. [57] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agar- wal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable In International conference on machine visual models from natural language supervision. learning, pp. 8748–8763. PMLR, 2021. [58] Austin Roorda and David R Williams. The arrangement of the three cone classes in the living human eye. Nature, 397(6719):520–522, 1999. [59] Nicolas Roux, Yoshua Bengio, Pascal Lamblin, Marc Joliveau, and Bal´azs K´egl. Learning the 2-d topology of images. In J. Platt, D. Koller, Y. Singer, and S. Roweis (eds.), Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007. [60] Rosanna P Sammons and Tara Keck. Adult plasticity and cortical reorganization after periph- eral lesions. Current Opinion in Neurobiology, 35:136–141, 2015. [61] Han Shi, Jiahui Gao, Hang Xu, Xiaodan Liang, Zhenguo Li, Lingpeng Kong, Stephen Lee, and James T Kwok. Revisiting over-smoothing in bert from the perspective of graph. arXiv preprint arXiv:2202.08625, 2022. [62] Eero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation. Annual review of neuroscience, 24(1):1193–1216, 2001. [63] X Yu Stella and Jianbo Shi. Multiclass spectral clustering. In Computer Vision, IEEE Interna- tional Conference on, volume 2, pp. 313–313. IEEE Computer Society, 2003. [64] Carsen Stringer, Marius Pachitariu, Matteo Carandini, and Kenneth Harris. Recordings of 10,000 neurons in visual cortex in response to 2,800 natural images. Figshare Repos, 2018. 
15 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 [65] Carsen Stringer, Marius Pachitariu, Nicholas Steinmetz, Matteo Carandini, and Kenneth D Harris. High-dimensional geometry of population responses in visual cortex. Nature, 571 (7765):361–365, July 2019. [66] Carsen Stringer, Michalis Michaelos, Dmitri Tsyboulski, Sarah E. Lindo, and Marius Pachi- tariu. High-precision coding in visual cortex. Cell, 184(10):2767–2778.e15, 2021. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2021.03.042. [67] Nicholas V. Swindale and Hans-Ulrich Bauer. Application of kohonen’s self-organizing fea- ture map algorithm to cortical maps of orientation and direction preference. Proceedings: Biological Sciences, 265(1398):827–838, 1998. ISSN 09628452. [68] Larry N Thibos. Acuity perimetry and the sampling theory of visual resolution. Optometry and vision science: official publication of the American Academy of Optometry, 75(6):399– 406, 1998. [69] Katarzyna Tomczak, Patrycja Czerwi´nska, and Maciej Wiznerowicz. Review the can- cer genome atlas (tcga): an immeasurable source of knowledge. Contemporary Oncol- ogy/Wsp´ołczesna Onkologia, 2015(1):68–77, 2015. [70] David C Van Essen, Charles H Anderson, and Daniel J Felleman. Information processing in the primate visual system: an integrated systems perspective. Science, 255(5043):419–423, 1992. [71] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. [72] Brian A Wandell, Serge O Dumoulin, and Alyssa A Brewer. Visual field maps in human cortex. Neuron, 56(2):366–383, 2007. [73] John N Weinstein, Eric A Collisson, Gordon B Mills, Kenna R Shaw, Brad A Ozenberger, Kyle Ellrott, Ilya Shmulevich, Chris Sander, and Joshua M Stuart. The cancer genome atlas pan-cancer analysis project. Nature genetics, 45(10):1113–1120, 2013. [74] Rachel OL Wong. Retinal waves and visual system development. Annual review of neuro- science, 22(1):29–47, 1999. [75] Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. [76] Youzheng Xu, Yixin Xu, Chun Wang, Baoguo Xia, Qingling Mu, Shaohong Luan, and Jun Fan. Mining tcga database for gene expression in ovarian serous cystadenocarcinoma mi- croenvironment. PeerJ, 9:e11375, 2021. [77] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and St´ephane Deny. Barlow twins: Self- supervised learning via redundancy reduction. In International Conference on Machine Learn- ing, pp. 12310–12320. PMLR, 2021. [78] Xiaoyu Zhang, Jingqing Zhang, Kai Sun, Xian Yang, Chengliang Dai, and Yike Guo. Inte- grated multi-omics analysis using variational autoencoders: Application to pan-cancer classi- fication. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 765–769, 2019. doi: 10.1109/BIBM47256.2019.8983228. [79] Xiaoyu Zhang, Yuting Xing, Kai Sun, and Yike Guo. Omiembed: A unified multi-task deep ISSN 2072-6694. doi: learning framework for multi-omics data. Cancers, 13(12), 2021. 10.3390/cancers13123047. 
16 Under review as a conference paper at ICLR 2025 A APPENDIX A.1 MOTIVATION OF DENSITY ADJUSTED SPECTRAL CLUSTERING Using the terminologies in functional analysis, the mutual information graph defined in section 2.2 corresponds to a compact Riemannian manifold M and the Laplacian matrix L is a discrete anal- ogous to the Laplace Beltrami operator L on M. Minimizing the spectral embedding objective tr(Y LY T ) directly corresponds to the following optimization problem in function space: min ||f ||L2 (M) (cid:90) M ||∇f ||2dλ (5) where f (x) : M → [0, 1] is the normalized signal defined on M. We particularly want to write out the continuous form of spectral embedding so we can adapt it to non-stationary signals. To do so, we assume the measure λ is absolutely continuous with respect to standard measure µ. By apply the Radon-Nikodym derivative to equation 5, we get: (cid:90) (cid:90) ||∇f ||2dλ = M M ||∇f ||2 dλ dµ dµ where the quantity dλ p(x) = dλ dµ , we can rewrite the optimization problem as the following: dµ is called the Radon-Nikodym, which is some form of density function. Let min ||f ||L2(M) (cid:90) M ||p(x) 1 2 ∇f (x)||2dµ (6) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Some other interpretation of spectral embedding allows one to design a specific clustering algorithm in step 4. For example, [63] interprets the eigenvector problem in 6 as a relaxed continuous version ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '4. Treating each row of Y as a point in Rk, cluster them into k clusters via K-means or other algorithms. ', 'modified_lines': '', 'original_lines': ' 17 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'As shown in Figure 8, the kernel at the center is much smaller in size than the kernel in the peripheral. This makes the kernel at the center more accurate but smaller, which means it summarizes less ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We provide further intuition and visualization on why density-adjusted spectral clustering allows the model to learn a better representation of the foveated CIFAR-10 dataset. ', 'modified_lines': '', 'original_lines': ' 18 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 Under review as a conference paper at ICLR 2025 Table 5: Evaluation on foveated CIFAR-10 with varying hyperparameter for density function. For each set of values of α and β, we perform density-adjusted spectral clustering and run URLOST with the corresponding cluster. The evaluation of each trial is provided in the table. beta = 0 beta = 2 alpha = 0 alpha = 0.5 alpha = 1.0 82.74 % 84.24 % 84.52 % 85.43 % 83.83 % 81.62 % ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A.5 SELF-ORGANIZING LAYER LEARNS INVERSE PERMUTATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Result of uniform density spectral clustering (α = 0, β = 0). 
Each cluster has a similar number of elements in them but the clusters in the center are much smaller than the clusters in the periphery. ', 'modified_lines': '', 'original_lines': '19 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 Under review as a conference paper at ICLR 2025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'for all i, j, k. Since the above equation holds for all ek, by linearity and the property of permutation matrix, we have: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'g(E(i)ek, W (i)) = W (i)E(i)ek g(E(j)ek, W (j)) = W (j)E(j)ek ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'big missing area on the graph, making the unsupervised learning task too difficult. On the other hand, if the cluster is too small, for example, if the cluster shrinks down to one pixel for an image, then the prediction tasks become too easy. The solution is simply doing a low-pass filter on the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Intuition on hyperparameter selection: Both these hyperparameters are related to the size of each cluster, which relates to how difficult the task is. If a cluster is too big, then we need to predict a ', 'modified_lines': '', 'original_lines': ' 20 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 6: Ratio between Parameters of Self-Organizing Layers and Parameters of the Entire Model. We report the model used for each dataset. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1132 1133 ', 'modified_lines': '', 'original_lines': ' Under review as a conference paper at ICLR 2025 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'A.8 TOPOLOGY AND STATIONARITY ', 'paragraph_idx': 6, 'before_section': None, 'context_before': '6.7% 15.6% 5.9% ', 'modified_lines': ' classification dataset. As shown in Figure 7, the model is not sensitive for most of the hyper- parameters. For a large range of hyperparameters, the model performance stays the same. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-26 19:07:14
ICLR.cc/2025/Conference
0AQqNQcfRC
x9gzhruVVl
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'videos, time series, and point clouds. However, these methods make implicit assumptions about the data domain’s topology and stationarity. We provide formal definition of both concept in Ap- pendix A.8 in the context our this work. We also provide a high level idea of these two concept: Topology refers to the low-dimensional structure arised from physical measurements, such as the pixel grids in images, the temporal structures in time series and text sequences, or the 3D structures in images are invariant to their spatial locations. A vase is still a vase no matter placed at the corner or the center of the image. The success of state-of-the-art self-supervised representation learning (SSL) methods largely de- pends on these two crucial assumptions. For example, in computer vision, popular SSL techniques, construction of image patches. Both approaches typically rely on convolutional neural networks convolutional filters or linear layers, which inherently exploit the stationarity and regular topology present in natural images. The geometric deep learning community has made significant efforts to extend machine learning to domains beyond those with regular topology. However, graph neural network (GNN)-based methods empirically struggle to scale with self-supervised learning objec- 1 ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'data and make these patterns readily apparent through specific representations. Over the past few years, there has been tremendous progress in the unsupervised representation learning community. Popular methods are self-supervised representation learning (SSL) method like contrastive learning ', 'modified_lines': 'and masked autoencoding [78; 11; 32; 80] work well on typical data modalities such as images, in molecules and point clouds [6]. Stationarity refers to the characteristic that the statistical proper- ties of the signal are invariant across its domain [5]. For instance, the statistics of pixels and patches such as Masked Autoencoders [33] and joint-embedding methods like SimCLR [11] require the (CNNs) or Vision Transformer (ViT) backbones [21]. These backbones consist of shared-weight tives and large datasets [63; 9; 8]. Methods that do scale well with large data still assume a minimal level of stationarity and regular topology [31]. ', 'original_lines': 'and masked autoencoding [75; 10; 30; 77] work well on typical data modalities such as images, in molecules and point clouds [5]. Stationarity refers to the characteristic that the statistical proper- ties of the signal are invariant across its domain [4]. For instance, the statistics of pixels and patches such as Masked Autoencoders [31] and joint-embedding methods like SimCLR [10] require the (CNNs) or Vision Transformer (ViT) backbones [19]. These backbones consist of shared-weight tives and large datasets [61; 8; 7]. Methods that do scale well with large data still assume a minimal level of stationarity and regular topology [29]. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'This motivates us to build unsupervised representations without relying on the prior stationarity of the raw signal or the topology of the input domain. The ability to build unsupervised representations without relying on topology and stationarity has huge advantages. 
For example, it allows humans to ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'regularity. Unlike the uniform grid in camera sensors, the cones and rods in the retina are distributed unevenly and non-uniformly. Yet, biological visual systems can establish a precise retinotopy map from the retina to neurons in visual cortex based on spontaneous, locally-propagated retinal activities ', 'modified_lines': 'and external stimuli [77; 43; 24] and leverage retinotopic input to build unsupervised representations. ', 'original_lines': 'and external stimuli [74; 41; 22] and leverage retinotopic input to build unsupervised representations. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'ods don’t. In this work, we aim to build unsupervised representations for general high-dimensional data and ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'Figure 1: From left to right: the unsupervised representation learning through joint embedding and masked auto-encoding; the biological vision system that perceives via unstructured sensor and un- ', 'modified_lines': 'derstands signal without stationarity or topology [56]; and many more such diverse high dimensional signal in natural science [52; 79] that our method supports while most existing unsupervised meth- ', 'original_lines': 'derstands signal without stationarity or topology [54]; and many more such diverse high dimensional signal in natural science [50; 76] that our method supports while most existing unsupervised meth- ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'pixels, which recovers a coarse topology of the input domain. These clusters are analogous to image patches except that they are slightly irregularly shaped and different in size. We mask a proportion the remaining unmasked ones. This “learning to predict masked tokens” approach is proposed in we test the proposed method on the synthesized biological visual dataset, derived from the CIFAR- 2 ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'in an unsupervised fashion without knowledge of the shuffling order? If possible, can we use such a method to build unsupervised representations for general high-dimensional data? We could figure out the neighbors of each pixel by use low-level statistics of among these pixels over many images. ', 'modified_lines': 'Inspired by Roux et al. [61], we use low-level statistics and spectral clustering to form clusters of the of these “patches” and utilize a Vision Transformer [21] to predict the masked “patches” based on masked autoencoders (MAE) [33] and has demonstrated effectiveness on typical modalities. Firstly, 10 [40] using a foveated retinal sampling mechanism [13]. Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [66] and the TCGA miRNA-based cancer classification dataset [71; 75]. Across all these benchmarks, ', 'original_lines': 'Inspired by Roux et al. [59], we use low-level statistics and spectral clustering to form clusters of the of these “patches” and utilize a Vision Transformer [19] to predict the masked “patches” based on masked autoencoders (MAE) [31] and has demonstrated effectiveness on typical modalities. Firstly, 10 [38] using a foveated retinal sampling mechanism [12]. 
Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [64] and the TCGA miRNA-based cancer classification dataset [69; 73]. Across all these benchmarks, ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'chemistry, biology, and neuroscience, our method offers a promising approach in the effort to build unsupervised representations for high-dimensional data. ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'our proposed method outperforms existing SSL methods, demonstrating its effectiveness in building unsupervised representations for signals lacking explicit stationarity or topology. Given the emer- ', 'modified_lines': 'gence of new modalities in deep learning from natural sciences [68; 30; 57; 42; 76; 16; 2], such as ', 'original_lines': 'gence of new modalities in deep learning from natural sciences [66; 28; 55; 40? ? ? ], such as ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '3.1 SYNTHETIC BIOLOGICAL VISION DATASETAs discussed in the introduction, the biological visual signal serves as an ideal data modality to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, thebiological visual signal is acquired through irregularly positioned ganglion cells, inherently lackingexplicit topology and stationarity. However, it is hard to collect real-world biological vision signals', 'after_section': '3.1 SYNTHETIC BIOLOGICAL VISION DATASETAs discussed in the introduction, the biological visual signal serves as an ideal data modality to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, thebiological visual signal is acquired through irregularly positioned ganglion cells, inherently lackingexplicit topology and stationarity. However, it is hard to collect real-world biological vision signals', 'context_after': 'have both high-resolution central vision and a wide overall receptive field. However, this distinctive sampling pattern causes visual signals in the retina to lack stationarity; the statistical properties differ between the center and periphery, with ganglion cell responses being highly correlated in the fovea but less correlated in the periphery. To mimic the foveated imaging with CIFAR-10, we adopt the retinal ganglion cell, which together form a non-stationary sampling lattice with irregular topology. Details on the implementation of foveation sampling is provided in Appendix A.3. Experiments. We compare URLOST on both of the synthetic vision datasets as well as the original We also compared the ViT backbone used in URLOST to other backbones such as convolutional neural network (CNN). Moreover, since CNN works better with stationary signals, we further com- pared our methods to SimCLR with a graph neural network (GNN) backbone, which doesn’t rely on the stationarity assumption of the data. For the GNN baseline, we use Vision GNN (ViG), which is supervision, we use linear probing to evaluate it. This is achieved by training a linear classifier on top of the pre-trained model with the given labels. The evaluations are reported in Table 1. 
SimCLR excels on CIFAR-10 but struggles with both synthetic datasets due to its inability to handle data ', 'paragraph_idx': 17, 'before_section': '3.1 SYNTHETIC BIOLOGICAL VISION DATASETAs discussed in the introduction, the biological visual signal serves as an ideal data modality to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, thebiological visual signal is acquired through irregularly positioned ganglion cells, inherently lackingexplicit topology and stationarity. However, it is hard to collect real-world biological vision signals', 'context_before': 'Foveated CIFAR-10. Unlike cameras that use uniform photosensors with consistent sampling pat- terns, the human retina features ganglion cells whose receptive field sizes vary—smaller in the cen- ', 'modified_lines': 'tral fovea and larger in the periphery. This results in foveated imaging [72], enabling primates to retina sampling mechanism from Cheung et al. [13]. Like shown in Figure 3, each dot represents a CIFAR-10 with popular unsupervised representation learning methods SimCLR [11] and MAE [33]. state of the art graph neural network on images proposed in [31]. After the model is trained without ', 'original_lines': 'tral fovea and larger in the periphery. This results in foveated imaging [70], enabling primates to retina sampling mechanism from Cheung et al. [12]. Like shown in Figure 3, each dot represents a CIFAR-10 with popular unsupervised representation learning methods SimCLR [10] and MAE [31]. state of the art graph neural network on images proposed in [29]. After the model is trained without ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': '3.2 V1 NEURAL RESPONSE TO NATURAL IMAGE STIMULUS', 'after_section': '3.2 V1 NEURAL RESPONSE TO NATURAL IMAGE STIMULUS', 'context_after': 'V1 neurons captured via two-photon calcium imaging. These neurons responded to 2,800 unique response. In the decoding task, a prediction is considered accurate if the neural response to a given stimulus in the first presentation closely matches the response to the same stimulus in the second presentation within the representation space. This task presents greater challenges than the synthetic ', 'paragraph_idx': 24, 'before_section': '3.2 V1 NEURAL RESPONSE TO NATURAL IMAGE STIMULUS', 'context_before': 'to challenge its generalizability with high-dimensional natural datasets. The first task is decoding neural response recording in the primary visual area (V1) of mice. ', 'modified_lines': 'V1 neural response dataset. The dataset, published by [52], contains responses from over 10,000 images from ImageNet [18], with each image presented twice to assess the consistency of the neural ', 'original_lines': 'V1 neural response dataset. The dataset, published by [50], contains responses from over 10,000 images from ImageNet [16], with each image presented twice to assess the consistency of the neural ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 23}, {'section': '3.1 SYNTHETIC BIOLOGICAL VISION DATASETAs discussed in the introduction, the biological visual signal serves as an ideal data modality to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, thebiological visual signal is acquired through irregularly positioned ganglion cells, inherently lackingexplicit topology and stationarity. 
However, it is hard to collect real-world biological vision signals', 'after_section': None, 'context_after': 'tiling a low dimensional manifold. This insight led us to treat the population neuron response as high-dimensional data and explore whether URLOST can effectively learn its representation. malization to the neural firing rate. The processed signals are high-dimensional vectors, and they can be directly used for the decoding task, which serves as the “raw” signal baseline in Table 2. For 6 ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'recordings rather than a curated dataset like CIFAR-10. For another, the geometric structure of the V1 area is substantially more intricate than that of the retina. To date, no precise mathematical model of the V1 neural response has been well established. The inherent topology and stationarity ', 'modified_lines': 'of the data still remain difficult to grasp [51; 50]. Nevertheless, evidence of retinotopy [24; 25] and findings from prior research [49; 12; 67] suggest that the neuron population code in V1 are Experiments. Following the approach in Pachitariu et al. [52] we apply standardization and nor- representation learning methods, URLOST is evaluated along with MAE and β-VAE [34]. For MAE ', 'original_lines': 'of the data still remain difficult to grasp [49; 48]. Nevertheless, evidence of retinotopy [22; 23] and findings from prior research [47; 11; 65] suggest that the neuron population code in V1 are Experiments. Following the approach in Pachitariu et al. [50] we apply standardization and nor- representation learning methods, URLOST is evaluated along with MAE and β-VAE [32]. For MAE ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 GENE EXPRESSION DATA', 'after_section': '3.3 GENE EXPRESSION DATA', 'context_after': 'which is a project that catalogs the genetic mutations responsible for cancer using genome sequenc- ing and bioinformatics. The project molecularly characterized over 20,000 primary cancers and matched normal samples spanning 33 cancer types. We focus on the pan-cancer classification task: ', 'paragraph_idx': 29, 'before_section': '3.3 GENE EXPRESSION DATA', 'context_before': 'In this subsection, we further evaluate URLOST on high-dimensional natural science data from a completely different domain, the gene expression data. ', 'modified_lines': 'Gene expression dataset. The dataset comes from The Cancer Genome Atlas (TCGA) [71; 75], ', 'original_lines': 'Gene expression dataset. The dataset comes from The Cancer Genome Atlas (TCGA) [69; 73], ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 28}, {'section': '3.3 GENE EXPRESSION DATA', 'after_section': '3.3 GENE EXPRESSION DATA', 'context_after': 'performance in Table 2. Again, our method learns meaningful representation from the original sig- nal. The learned representation benefited the classification task and achieved the best performance, demonstrating URLOST’s ability to learn meaningful representation of data from diverse domains. ', 'paragraph_idx': 31, 'before_section': '3.3 GENE EXPRESSION DATA', 'context_before': 'Experiments. Similar to Section 3.2, URLOST is compared with the original signals, MAE, and β-VAE, which is the state-of-the-art unsupervised learning method on TCGA cancer classification ', 'modified_lines': '[81; 82]. We also randomly partition the dataset do five-fold cross-validation and report the average ', 'original_lines': '[78; 79]. 
We also randomly partition the dataset do five-fold cross-validation and report the average ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 31}, {'section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'after_section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'context_after': 'considered as a projection layer that is shared among all elements in the input sequence. The self- organizing layer g(·, w(i)) introduced in Section 2.3 can be considered as a non-shared projection layer. We conducted an ablation study comparing the two designs to demonstrate the effectiveness of ', 'paragraph_idx': 33, 'before_section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'context_before': '(3) which is further processed by a neural network. The sequential inputs can be a list of language tokens ', 'modified_lines': '[20; 58], pixel values [10], image patches [21], or overlapped image patches [11; 32; 80]. E can be ', 'original_lines': '[18; 56], pixel values [9], image patches [19], or overlapped image patches [10; 30; 77]. E can be ', 'after_paragraph_idx': 33, 'before_paragraph_idx': 33}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': '5 ADDITIONAL RELATED WORKS', 'context_after': 'Kohonen in 1982 first come up with self-orgnaizing map (SOM). The algorithm produces a low- dimensional representation of a higher-dimensional dataset while preserving the topological struc- bling” by mapping pixels into a 2D index set by leveraging the mutual information between pixels learning. The community tried to feed the “recovered topology” to a graph neural network (GNN) and neural circuits in the retina result in a more irregular 2-D topology than the natural image, which visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of visual input from 9 ', 'paragraph_idx': 45, 'before_section': '5 ADDITIONAL RELATED WORKS', 'context_before': 'Several interconnected pursuits are linked to this work, and we will briefly address them here: Topology in biological visual signal. 2-D topology of natural images is strong prior that requires ', 'modified_lines': 'many bits to encode [19; 3]. Such 2-D topology is encoded in the natural image statistic [64; 36]. ture of the data [39]. It is also motivated to solve the “unscramble” pixels problem by “descram- at different locations. More detail in Appendix A.12. [61] tackles the same problem with manifold [6], but suffered from inherent scalability issues on using GNN to do unsupervised learning. Optic can still be simulated [60; 54; 55; 53; 70; 38]. This information is further processed by the primary ', 'original_lines': 'many bits to encode [17; 2]. Such 2-D topology is encoded in the natural image statistic [62; 34]. ture of the data [37]. It is also motivated to solve the “unscramble” pixels problem by “descram- at different locations. More detail in Appendix A.12. [59] tackles the same problem with manifold [5], but suffered from inherent scalability issues on using GNN to do unsupervised learning. Optic can still be simulated [58; 52; 53; 51; 68; 36]. This information is further processed by the primary ', 'after_paragraph_idx': 45, 'before_paragraph_idx': 45}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': '5 ADDITIONAL RELATED WORKS', 'context_after': 'evidence that topology under retinal ganglion cell and V1 neurons can be recovered. Evidence of self-organizing mechanism in the brain. In neuroscience, many works use the self- putations. 
Even though V1 functional organizations are present at birth, numerous studies indicate Learning with signal on non-euclidean geometry. In recent years, researchers from the machine learning community have made efforts to consider geometries and special structures beyond classic images, text, and feature vectors. This is the key motivation for geometric deep learning and graph neural networks (GNN). Many works generalizes common operator for processing Euclidean data We compared their proposed neural network architecture with the ViT backbone used in URLOST. Recent research also explores adapting the Transformer to domains beyond Euclidean spaces such URLOST employs a single mutual information graph to define the topology of the high-dimensional corresponds to a distinct graph. It segments a graph into ”subgraphs,” processes them with a GNN, and then passes the output to a transformer. This approach is undeniably more flexible but requires all subgraphs to be globally aligned. Furthermore, the self-organizing layer in URLOST generalizes Self-supervised learning. Self-supervised learning (SSL) has made substantial progress in recent years. Different SSL method is designed for each modality, for example: predicting the masked/next descent scalability with a vast amount of unlabeled data and have shown their power by achieving performance on par with or even surpassing supervised methods. They have also exhibited huge 6 DISCUSSION The success of most current state-of-the-art self-supervised representation learning methods relies on unsupervised representation learning under a more general assumption, where the stationarity and topology of the data are unknown to the machine learning model and its designers. We argue that this is a general and realistic assumption for high-dimensional data in modalities of natural science. ', 'paragraph_idx': 45, 'before_section': None, 'context_before': '538 539 ', 'modified_lines': 'retina is encoded by the neuron in primary visual cortex [48; 26; 35; 25; 74; 56]. These study provide organizing maps (SOM) as a computational model for V1 functional organization [22; 69; 1; 23; 47; 39]. In other words, this idea of self-organizing is a principle governing how the brain performs com- that the brain’s self-organizing mechanisms continue after full development [29; 62; 37]. like 2d convolution and attention [6; 45; 17; 27]. Due to the natural of graph neural network, they often only work on limited data regime and do not scale to large data [63; 9; 8]. However, recent development in [31] shows this direction is prominent. They successfully scales GNN to ImageNet. as [15; 14; 28]. Ma et al. [41] treats an image as a set of points but relies on 2D coordinates. signal. Gao et al. [28], on the other hand, is designed to handle graph data, where each data point the ”patch resizer” mechanism from FlexiViT used in Beyer et al. [4] . token in NLP[20; 58; 7], solving pre-text tasks, predicting masked patches, or building contrastive image pairs in computer vision [44; 33; 78; 11; 32; 80]. These SSL methods have demonstrated potential in cross-modal learning, such as the CLIP by Radford et al. [59]. the assumption that the data has known stationarity and domain topology. In this work, we explore ', 'original_lines': 'retina is encoded by the neuron in primary visual cortex [46; 24; 33; 23; 72; 54]. These study provide organizing maps (SOM) as a computational model for V1 functional organization [20; 67; 1; 21; 45; 37]. 
In other words, this idea of self-organizing is a principle governing how the brain performs com- that the brain’s self-organizing mechanisms continue after full development [27; 60; 35]. like 2d convolution and attention [5; 43; 15; 25]. Due to the natural of graph neural network, they often only work on limited data regime and do not scale to large data [61; 8; 7]. However, recent development in [29] shows this direction is prominent. They successfully scales GNN to ImageNet. as [14; 13; 26]. Ma et al. [39] treats an image as a set of points but relies on 2D coordinates. signal. Gao et al. [26], on the other hand, is designed to handle graph data, where each data point the ”patch resizer” mechanism from FlexiViT used in Beyer et al. [3] . token in NLP[18; 56; 6], solving pre-text tasks, predicting masked patches, or building contrastive image pairs in computer vision [42; 31; 75; 10; 30; 77]. These SSL methods have demonstrated potential in cross-modal learning, such as the CLIP by Radford et al. [57]. the assumption that the data has known stationarity and domain topology, such as the grid-like RGB images and time sequences. However, biological vision systems have evolved to deal with signals with less regularity, which allows it to develop more efficient sensors. In this work, we explore ', 'after_paragraph_idx': 45, 'before_paragraph_idx': None}, {'section': '3.3 GENE EXPRESSION DATA', 'after_section': None, 'context_after': 'a modification to adjust the density: 1. Define D to be the diagonal matrix whose (i,i)-element is the sum of A’s i-th row. Con- ', 'paragraph_idx': 30, 'before_section': None, 'context_before': 'Let Aij = I(Si; Sj) be the affinity matrix, and let the density adjustment matrix be P defined in 2.2. Correlation is used instead of mutual information when the dimension is really high since computing ', 'modified_lines': 'mutual information is expensive. We follow the steps from [46] to perform spectral clustering with ', 'original_lines': 'mutual information is expensive. We follow the steps from [44] to perform spectral clustering with ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER', 'after_section': None, 'context_after': 'The location of each kernel is illustrated in Figure 8.B. The radius of the kernel scales proportionally to the eccentricity. Here, we use the distance from the kernel to the center to represent eccentricity. The relationship between the radius of the kernel and eccentricity is shown in Figure 8.C. As men- ', 'paragraph_idx': 33, 'before_section': None, 'context_before': '84.52 % 85.43 % 83.83 % 81.62 % ', 'modified_lines': 'pixel value and the corresponding discrete Gaussian kernel. This can be formulated as: G[i] = N (cid:88) W (cid:88) n m K(⃗xi, σ′ i)[n, m]I[n, m] where N and W are dimensions of the image, and I represents the image pixels. For foveated CIFAR-10, since the image is very low resolution, we first upsample it 3 times from 32 × 32 to 96 × 96, then use in total of 1038 Gaussian kernels to sample from the upsampled image. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'density spectral clustering, which is shown in Table 5. Meanwhile, setting α and β too large will also hurt the model’s performance because it creates clusters that are too unbalanced. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'small eccentricity and are more correlated to their neighbor, increasing α and β will make sampling kernels at the center have higher density, which makes the cluster at the center larger. This is why URLOST with density-adjusted spectral clustering performs better than URLOST with constant ', 'modified_lines': '', 'original_lines': ' 17 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '18 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'represents the pixel basis at location k. Permutation matrix E(i) will send kth pixel to some location accordingly. Mathematically, if the projection layer effectively aligns the input sequence, it means g(E(j)ek, W (j)) = g(E(i)ek, W (i)) for all i, j, k. We can further expand this property to get the ', 'modified_lines': '', 'original_lines': 'following two equations: g(E(i)ek, W (i)) = W (i)E(i)ek g(E(j)ek, W (j)) = W (j)E(j)ek for all i, j, k. Since the above equation holds for all ek, by linearity and the property of permutation matrix, we have: W (i)E(i) = W (j)E(j) E(i)T W (i) = E(j)T W (j) This implies E(i)T w(i) should exhibit visual similarities for all i. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Figure 7: Effect of spectral clustering hyperparameters on performance (A) Effect of number of cluster on performance. (B) Effect of normalization parameter for spectral clustering (β in Eq. 4) on performance. (C) Effect of numbers of neighbors from K nearest neighbors on performance. of single kernel filter parameterized by a mean µ′ and variance σ′. (B) the location of each Gaussian kernel is summarized as a point with 2D coordinate µ′. In total, the locations of 1038 Gaussian kernels are plotted. (C) The relationship between eccentricity (distance of the kernel to the center) and radius of the kernel is shown. A.8 TOPOLOGY AND STATIONARITY ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'We perform an experiment doing a grid search over each hyperparameter in spectral clustering that affects the performance of URLOST. The grid search experiment is performed over the TCGA gene ', 'modified_lines': 'classification dataset. As shown in Figure 7, the model is not sensitive for most of the hyper- parameters. For a large range of hyperparameters, the model performance stays the same. Figure 8: Foveated retinal sampling (A) Illustration of a Guassian kernel shown in [13]. Diagram ', 'original_lines': ' 19 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 Under review as a conference paper at ICLR 2025 Figure 8: Foveated retinal sampling (A) Illustration of a Guassian kernel shown in [12]. 
Diagram 20 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 Under review as a conference paper at ICLR 2025 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 Table 6: Ratio between Parameters of Self-Organizing Layers and Parameters of the Entire Model. We report the model used for each dataset. Dataset Permuted Cifar10 TCGA V1 Ratio 6.7% 15.6% 5.9% classification dataset. As shown in Figure 7, the model is not sensitive for most of the hyper- parameters. For a large range of hyperparameters, the model performance stays the same. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2024-11-27 04:29:59
ICLR.cc/2025/Conference
x9gzhruVVl
jlOqxkX90b
[{'section': 'Abstract', 'after_section': None, 'context_after': '22 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'organizing map” (SOM) algorithm to address this problem. SOM is an unsupervised machine learn- ing technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional dataset while preserving the topological structure of the data. In the task of “de- ', 'modified_lines': '', 'original_lines': 'scrambling” pixels, SOM maps each pixel into a 2D index set by leveraging the mutual information between pixels at different locations. The parameters of SOM include a set of neurons, where each neuron has a location index. Let Wv denote the weight vector of the vth neuron and D the input vector. θ(u, v, s) is the interaction function that determines the influence between the uth and vth neurons based on their distance. The update equation for SOM is given by: Wv(s + 1) = Wv(s) + θ(u, v, s) · α(s) · (D(t) − Wv(s)) In the task of “descrambling” pixels, the input consists of pixel values. The indices will be two- dimensional, such as s = (i, j), and Wv(s) will learn to represent a set of “descrambled” pixels, ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
2024-11-27 07:16:30
ICLR.cc/2025/Conference
jlOqxkX90b
BkrbtKMneg
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION The success of state-of-the-art self-supervised representation learning (SSL) methods largely de- pends on these two crucial assumptions. For example, in computer vision, popular SSL techniques, construction of image patches. Both approaches typically rely on convolutional neural networks convolutional filters or linear layers, which inherently exploit the stationarity and regular topology present in natural images. The geometric deep learning community has made significant efforts to What if we have high-dimensional signals without prior knowledge of their domain topology or stationarity? Can we still craft a high-quality representation? This is not only the situation that biological intelligence systems have to deal with but also a practical setting for many scientific data cameras. Biological visual systems, on the other hand, have to deal with signals with less domain regularity. Unlike the uniform grid in camera sensors, the cones and rods in the retina are distributed unevenly and non-uniformly. Yet, biological visual systems can establish a precise retinotopy map from the retina to neurons in visual cortex based on spontaneous, locally-propagated retinal activities This motivates us to build unsupervised representations without relying on the prior stationarity of the raw signal or the topology of the input domain. The ability to build unsupervised representations without relying on topology and stationarity has huge advantages. For example, it allows humans to ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'ods that are not dependent on these factors, setting a new benchmark in the field. We position this work as a step toward unsupervised learning methods capable of generalizing across diverse high-dimensional data modalities. Code is available at ', 'modified_lines': 'this repository. Unsupervised representation learning aims to develop models that autonomously detect patterns in data and make these patterns readily apparent through specific representations. Over the past few years, there has been tremendous progress in the unsupervised representation learning com- munity, especially self-supervised representation learning (SSL) method. Popular methods SSL methods like contrastive learning and masked autoencoding [87; 12; 33; 90] work well on typical data modalities such as images, videos, time series, and point clouds. However, these methods make implicit assumptions about the data domain’s topology and stationarity. Topology refers to the low-dimensional structure arisen from physical measurements, such as the pixel grids in images, the temporal structures in time series and text sequences, or the 3D structures in molecules and point clouds [7]. Stationarity refers to the characteristic that the statistical properties of the signal are invariant across its domain [6]. For instance, the statistics of pixels and patches in images are in- variant to their spatial locations. A vase is still a vase no matter placed at the corner or the center of the image. such as Masked Autoencoders [34] and joint-embedding methods like SimCLR [12] require the (CNNs) or Vision Transformer (ViT) backbones [22]. These backbones consist of shared-weight extend machine learning to domains beyond those with regular topology. However, graph neural net- work (GNN)-based methods empirically struggle to scale with self-supervised learning objectives and large datasets [69; 10; 9]. 
Methods that do scale well with large data still assume a minimal level of stationarity and regular topology [32]. 1 Published as a conference paper at ICLR 2025 analysis problems. Taking images as an example, computer vision system takes in signal from digital and external stimuli [86; 47; 25] and leverage retinotopic input to build unsupervised representations. ', 'original_lines': 'this anonymous repository. Unsupervised representation learning aims to develop models that autonomously detect patterns in data and make these patterns readily apparent through specific representations. Over the past few years, there has been tremendous progress in the unsupervised representation learning community. Popular methods are self-supervised representation learning (SSL) method like contrastive learning and masked autoencoding [78; 11; 32; 80] work well on typical data modalities such as images, videos, time series, and point clouds. However, these methods make implicit assumptions about the data domain’s topology and stationarity. We provide formal definition of both concept in Ap- pendix A.8 in the context our this work. We also provide a high level idea of these two concept: Topology refers to the low-dimensional structure arised from physical measurements, such as the pixel grids in images, the temporal structures in time series and text sequences, or the 3D structures in molecules and point clouds [6]. Stationarity refers to the characteristic that the statistical proper- ties of the signal are invariant across its domain [5]. For instance, the statistics of pixels and patches in images are invariant to their spatial locations. A vase is still a vase no matter placed at the corner or the center of the image. such as Masked Autoencoders [33] and joint-embedding methods like SimCLR [11] require the (CNNs) or Vision Transformer (ViT) backbones [21]. These backbones consist of shared-weight extend machine learning to domains beyond those with regular topology. However, graph neural network (GNN)-based methods empirically struggle to scale with self-supervised learning objec- tives and large datasets [63; 9; 8]. Methods that do scale well with large data still assume a minimal level of stationarity and regular topology [31]. 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 analysis problems. Taking images as an example, computer vision system takes in singal from digital and external stimuli [77; 43; 24] and leverage retinotopic input to build unsupervised representations. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'ods don’t. In this work, we aim to build unsupervised representations for general high-dimensional data and introduce unsupervised representation learning without stationarity or topology (URLOST). Tak- ing images as an example again, let’s assume we receive a set of images whose pixels are shuffled pixels, which recovers a coarse topology of the input domain. These clusters are analogous to image patches except that they are slightly irregularly shaped and different in size. We mask a proportion the remaining unmasked ones. 
This “learning to predict masked tokens” approach is proposed in we test the proposed method on the synthesized biological visual dataset, derived from the CIFAR- 2 EncEncℎ!ℎ"DecEnc"""!ℎBiological Vision SystemData w/ stationarity and topology Data w/o stationarity and topology… many more high -dimensional datain natureℎℎ?URLOSTNew ParadigmJoint EmbeddingMAE Figure 2: The overview framework of URLOST. The high-dimensional input signal undergoes clustering and self-organization before unsupervised learning using a masked autoencoder for signal reconstruction. chemistry, biology, and neuroscience, our method offers a promising approach in the effort to build unsupervised representations for high-dimensional data. 2 METHOD 2.1 MOTIVATION AND OVERALL FRAMEWORK Our objective is to build robust unsupervised representations for high-dimensional signals without prior information on explicit topology and stationarity. The learned representations are intended to enhance performance in downstream tasks such as classification. We begin by using low-level ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'Figure 1: From left to right: the unsupervised representation learning through joint embedding and masked auto-encoding; the biological vision system that perceives via unstructured sensor and un- ', 'modified_lines': 'derstands signal without stationarity or topology [61]; and many more such diverse high dimensional signal in natural science [57; 89] that our method supports while most existing unsupervised meth- in the same order. In this case, the original definition of topology of images is destroyed, i.e. each pixel should no longer the neighbor of the pixels that’s physically next to it. How can we build representations in an unsupervised fashion without knowledge of the shuffling order? If possible, can we use such a method to build unsupervised representations for general high-dimensional data? Inspired by Roux et al. [67], we use low-level statistics and spectral clustering to form clusters of the of these “patches” and utilize a Vision Transformer [22] to predict the masked “patches” based on masked autoencoders (MAE) [34] and has demonstrated effectiveness on typical modalities. Firstly, 10 [44] using a foveated retinal sampling mechanism [14]. Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [73] and the TCGA miRNA-based cancer classification dataset [79; 84]. Across all these benchmarks, our proposed method outperforms existing SSL methods, demonstrating its effectiveness in building unsupervised representations for signals lacking explicit stationarity or topology. Given the emer- gence of new modalities in deep learning from natural sciences [75; 31; 62; 46; 85; 17; 2], such as Published as a conference paper at ICLR 2025 ', 'original_lines': 'derstands signal without stationarity or topology [56]; and many more such diverse high dimensional signal in natural science [52; 79] that our method supports while most existing unsupervised meth- in the same order. The original definition of topology of images is destroyed, i.e. each pixel should no longer the neighbor of the pixels that’s physically next to it. How can we build representations in an unsupervised fashion without knowledge of the shuffling order? If possible, can we use such a method to build unsupervised representations for general high-dimensional data? 
We could figure out the neighbors of each pixel by use low-level statistics of among these pixels over many images. Inspired by Roux et al. [61], we use low-level statistics and spectral clustering to form clusters of the of these “patches” and utilize a Vision Transformer [21] to predict the masked “patches” based on masked autoencoders (MAE) [33] and has demonstrated effectiveness on typical modalities. Firstly, 10 [40] using a foveated retinal sampling mechanism [13]. Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [66] and the TCGA miRNA-based cancer classification dataset [71; 75]. Across all these benchmarks, 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 Under review as a conference paper at ICLR 2025 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 our proposed method outperforms existing SSL methods, demonstrating its effectiveness in building unsupervised representations for signals lacking explicit stationarity or topology. Given the emer- gence of new modalities in deep learning from natural sciences [68; 30; 57; 42; 76; 16; 2], such as ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The optimization problem becomes the following: min ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'A clustering algorithm is then applied to the embedding Y as explained in appendix A.2. The size and shape of the clusters strongly affect the unsupervised learning performance. To adjust the size and shape, we apply a density adjustment matrix P to adjust L in the spectral embedding objective. ', 'modified_lines': '', 'original_lines': ' 3 Raw SignalAligned ClustersSignal ClusteringSelf-organizingLayerEncDecMaskedAutoencoderRe-organizingLayerReconstructedSignal Under review as a conference paper at ICLR 2025 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULT', 'after_section': '3 RESULT', 'context_after': '3.1 SYNTHETIC BIOLOGICAL VISION DATASET As discussed in the introduction, the biological visual signal serves as an ideal data modality to vali- date the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, the biological visual signal is acquired through irregularly positioned ganglion cells, inherently lacking explicit topology and stationarity. However, it is hard to collect real-world biological vision signals with high precision and labels to evaluate our algorithm. Therefore, we employ a retinal sampling technique to modify the classic CIFAR-10 dataset and simulate imaging from the biological vision signal. The synthetic dataset is referred to as Foveated CIFAR-10. To make a comprehensive com- ', 'paragraph_idx': 11, 'before_section': '3 RESULT', 'context_before': 'two high-dimensional natural datasets collected from diverse domains. 
Detailed information about each dataset and the corresponding experiments is presented in the following subsections. Across all tasks, URLOST consistently outperforms other strong unsupervised representation learning meth- ', 'modified_lines': 'ods. For all the experiments in this work, we use linear layer to parametrize the self-organizing layer, i.e. g(x, W (i)) = W (i)x. Additionally, we provide training hyperparameters and experiments for effect of hyperparameters in Appendix A.6. ', 'original_lines': 'ods. Code is provided at this repository for reproducibility. For all the experiments in this work, we use linear layer to parametrize the self-organizing layer, i.e. g(x, W (i)) = W (i)x. Additionally, we provide training hyperparameters and intuition for how to pick these hyperparameters in the Ap- pendix A.6. We also provide additional experiments on how these hyperparameters affect model’s performance in Appendix A.6. 4 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '3 RESULT', 'after_section': '3 RESULT', 'context_after': 'Foveated CIFAR-10. Unlike cameras that use uniform photosensors with consistent sampling pat- terns, the human retina features ganglion cells whose receptive field sizes vary—smaller in the cen- have both high-resolution central vision and a wide overall receptive field. However, this distinctive sampling pattern causes visual signals in the retina to lack stationarity; the statistical properties differ between the center and periphery, with ganglion cell responses being highly correlated in the fovea but less correlated in the periphery. To mimic the foveated imaging with CIFAR-10, we adopt the retinal ganglion cell, which together form a non-stationary sampling lattice with irregular topology. Details on the implementation of foveation sampling is provided in Appendix A.3. Experiments. We compare URLOST on both of the synthetic vision datasets as well as the original We also compared the ViT backbone used in URLOST to other backbones such as convolutional neural network (CNN). Moreover, since CNN works better with stationary signals, we further com- pared our methods to SimCLR with a graph neural network (GNN) backbone, which doesn’t rely on the stationarity assumption of the data. For the GNN baseline, we use Vision GNN (ViG), which is supervision, we use linear probing to evaluate it. This is achieved by training a linear classifier on top of the pre-trained model with the given labels. The evaluations are reported in Table 1. SimCLR excels on CIFAR-10 but struggles with both synthetic datasets due to its inability to handle data ', 'paragraph_idx': 14, 'before_section': '3 RESULT', 'context_before': 'value is displayed at its respective lattice location for visualization purposes. (D) density-adjusted spectral clustering results are shown. Each unique color represents a cluster, with each kernel col- ored according to its assigned cluster. ', 'modified_lines': 'tral fovea and larger in the periphery. This results in foveated imaging [80], enabling primates to retina sampling mechanism from Cheung et al. [14]. Like shown in Figure 3, each dot represents a CIFAR-10 with popular unsupervised representation learning methods SimCLR [12] and MAE [34]. state of the art graph neural network on images proposed in [32]. After the model is trained without ', 'original_lines': ' tral fovea and larger in the periphery. 
This results in foveated imaging [72], enabling primates to retina sampling mechanism from Cheung et al. [13]. Like shown in Figure 3, each dot represents a CIFAR-10 with popular unsupervised representation learning methods SimCLR [11] and MAE [33]. state of the art graph neural network on images proposed in [31]. After the model is trained without ', 'after_paragraph_idx': 14, 'before_paragraph_idx': 14}, {'section': '3 RESULT', 'after_section': None, 'context_after': '5 Table 1: Evaluation on computer vision and synthetic biological vision dataset. ViT (Patch) stands for the Vision Transformer backbone with image patches as inputs. ViT (Pixel) means pixels ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'formances when there is no topology or stationarity, achieving 86.4% on Permuted CIFAR-10 and 85.4% on Foveated CIFAR-10 when the baselines completely fail. ', 'modified_lines': '3.2 V1 NEURAL RESPONSE TO NATURAL IMAGE STIMULUS After testing URLOST’s performance on synthetic biological vision data, we take a step further to challenge its generalizability with high-dimensional natural datasets. The first task is decoding neural response recording in the primary visual area (V1) of mice. V1 neural response dataset. The dataset, published by [57], contains responses from over 10,000 V1 neurons captured via two-photon calcium imaging. These neurons responded to 2,800 unique images from ImageNet [19], with each image presented twice to assess the consistency of the neural response. In the decoding task, a prediction is considered accurate if the neural response to a given Published as a conference paper at ICLR 2025 ', 'original_lines': '216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULT', 'after_section': None, 'context_after': 'stimulus in the first presentation closely matches the response to the same stimulus in the second presentation within the representation space. This task presents greater challenges than the synthetic biological vision described in the prior section. For one, the data comes from real-world neural recordings rather than a curated dataset like CIFAR-10. For another, the geometric structure of the V1 area is substantially more intricate than that of the retina. To date, no precise mathematical model of the V1 neural response has been well established. The inherent topology and stationarity tiling a low dimensional manifold. This insight led us to treat the population neuron response as high-dimensional data and explore whether URLOST can effectively learn its representation. malization to the neural firing rate. The processed signals are high-dimensional vectors, and they can be directly used for the decoding task, which serves as the “raw” signal baseline in Table 2. For baseline, we obtain patches by randomly selecting different dimensions from the signal. Note that the baseline methods need to handle high-dimensional vector data without stationarity or topology. Since SimCLR and other constrastive learning model leverage these two properties to make positive ', 'paragraph_idx': 13, 'before_section': None, 'context_before': '94.15% ± 0.24% 94.90% ± 0.25 % ', 'modified_lines': 'of the data still remain difficult to grasp [56; 55]. 
Nevertheless, evidence of retinotopy [25; 26] and findings from prior research [54; 13; 74] suggest that the neuron population code in V1 are Experiments. Following the approach in Pachitariu et al. [57] we apply standardization and nor- representation learning methods, URLOST is evaluated along with MAE and β-VAE [35]. For MAE ', 'original_lines': '3.2 V1 NEURAL RESPONSE TO NATURAL IMAGE STIMULUS After testing URLOST’s performance on synthetic biological vision data, we take a step further to challenge its generalizability with high-dimensional natural datasets. The first task is decoding neural response recording in the primary visual area (V1) of mice. V1 neural response dataset. The dataset, published by [52], contains responses from over 10,000 V1 neurons captured via two-photon calcium imaging. These neurons responded to 2,800 unique images from ImageNet [18], with each image presented twice to assess the consistency of the neural response. In the decoding task, a prediction is considered accurate if the neural response to a given of the data still remain difficult to grasp [51; 50]. Nevertheless, evidence of retinotopy [24; 25] and findings from prior research [49; 12; 67] suggest that the neuron population code in V1 are Experiments. Following the approach in Pachitariu et al. [52] we apply standardization and nor- representation learning methods, URLOST is evaluated along with MAE and β-VAE [34]. For MAE 6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 Masking Unit Permuted Cifar10 Foveated Cifar10 TCGA Gene V1 Response Clusters (URLOST) Random patch Individual dimension 86.4% 55.7% 56.7% 85.4% 51.1% 48.5% 94.9% 91.7% 88.3% 78.8% 73.9% 64.8% Table 3: Performance of different datasets using different masking units. “Clusters (URLOST)” refers to the clusters formed using pairwise mutual information and spectral clustering. “Random patch” refers to patches formed by aggregating random dimensions. “Individual dimension” refers to using individual dimensions as masking units without any aggregation. The model is trained for the mask prediction task, and the probing accuracy is reported in the table. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULT', 'after_section': '3 RESULT', 'context_after': 'performance in Table 2. Again, our method learns meaningful representation from the original sig- nal. The learned representation benefited the classification task and achieved the best performance, demonstrating URLOST’s ability to learn meaningful representation of data from diverse domains. ', 'paragraph_idx': 24, 'before_section': '3 RESULT', 'context_before': 'Experiments. Similar to Section 3.2, URLOST is compared with the original signals, MAE, and β-VAE, which is the state-of-the-art unsupervised learning method on TCGA cancer classification ', 'modified_lines': '[91; 92]. We also randomly partition the dataset do five-fold cross-validation and report the average ', 'original_lines': '[81; 82]. 
We also randomly partition the dataset do five-fold cross-validation and report the average ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': '4 ABLATION STUDY', 'after_section': '4 ABLATION STUDY', 'context_after': '4.2 SELF-ORGANIZING LAYER VS SHARED PROJECTION LAYER ', 'paragraph_idx': 25, 'before_section': None, 'context_before': '4 ABLATION STUDY 4.1 AGGREGATING SIMILAR DIMENSIONS FOR MASKING UNIT ', 'modified_lines': 'Clustering is at the heart of the proposed method. Why should similar dimensions be aggregated to form a masking unit? Intuitively, we assume that similar dimensions are used to sample similar re- gions of the underlying signals. Instead of masking each dimension, making the entire cluster force the model to learn high-level structure from the signal, allowing the model to learn a rich represen- tation. For example, for the CIFAR10 example, similar pixels tend to sample similar regions, which together form image patches. For V1 data, each neuron encodes a simple pattern like an oriented edge, but neurons together code patterns like shape and contours. We provide experiments for using different masking units, as shown in table 3, aggregating similar dimensions significantly improves the performance. ', 'original_lines': ' The clustering is the heart of the proposed method. Why should similar dimensions be aggregated to form a masking unit? We provide both intuition and ablation experiments to answer this question. First, aggregating dimensions create a nontrivial unsupervised learning task for the model. Making individual dimension will lead to trivial solutions. To fill in the missing value of a dimension, the model tends to learn to use low-level information such as the averaged value of similar dimensions. But why aggregating similar dimensions instead of random dimensions? We assume similar dimen- sions are used to sampling similar region of the underlying signals. Clustering of similar dimensions contains underlying patterns. Model learn these patterns to form a good representation through the mask-prediction task. For example, for the CIFAR10 example, similar pixels tend to sample seman- tically similar regions, which together form image patches. For V1 data, each neurons code simple pattern like oriented edge but neurons together code patterns like shape and contours. Predicting at the shape level is a non-trival SSL task. We also provide experiments for using different masking units, as shown in table 3, aggregating similar dimensions significantly improves the performance. 7 Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 ', 'after_paragraph_idx': 25, 'before_paragraph_idx': None}, {'section': '4 ABLATION STUDY', 'after_section': '4 ABLATION STUDY', 'context_after': 'which is further processed by a neural network. The sequential inputs can be a list of language tokens considered as a projection layer that is shared among all elements in the input sequence. The self- organizing layer g(·, w(i)) introduced in Section 2.3 can be considered as a non-shared projection layer. We conducted an ablation study comparing the two designs to demonstrate the effectiveness of the self-organizing layers both quantitatively and qualitatively. To facilitate the ablation, we further synthesized another dataset. Locally-permuted CIFAR-10. 
To directly evaluate the performance of the non-shared projection approach, we designed an experiment involving intentionally misaligned clusters. In this experi- ', 'paragraph_idx': 25, 'before_section': '4 ABLATION STUDY', 'context_before': 'vectors with a linear transformation: z0 = [Ex(1), · · · Ex(M )] ', 'modified_lines': '[21; 63], pixel values [11], image patches [22], or overlapped image patches [12; 33; 90]. E can be (3) 7 Published as a conference paper at ICLR 2025 ', 'original_lines': ' (3) [20; 58], pixel values [10], image patches [21], or overlapped image patches [11; 32; 80]. E can be ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '4 ABLATION STUDY', 'after_section': None, 'context_after': '4.3 DENSITY ADJUSTED CLUSTERING VS UNIFORM DENSITY CLUSTERING As explained in Section 2.2, the shape and size of each cluster depend on how the density function ', 'paragraph_idx': 30, 'before_section': None, 'context_before': 'layer also does some extra regular transformation to each patch. It is likely that the self-organizing layer learns to encode position information as transformations of some simple group structures. ', 'modified_lines': 'Figure 4: Learnt weights of a self-organizing layer. (A) Image is cropped into patches, where each patch x(i) first undergoes a different permutation E(i), then the inverse permutation E(i)T . (B) The learned weight of the linear self-organizing layer. The 12th column of W (i) at all positions i are reshaped into patches and visualized. When W (i) undergoes the inverse permutation E(i)T , they show similar patterns. (C) Visualization of the 37th column of W (i). Similar to (B). ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'will make the prediction task too difficult. In either scenario, the model’s ability to learn effective representations is compromised. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'amounts of information (refer to Appendix A.4.). A balanced distribution of information across clus- ters enhances the model’s ability to learn meaningful representations. Without this balance, masking a low-information cluster makes the prediction task trivial, while masking a high-information cluster ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2025 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Figure 4: Learnt weights of a self-organizing layer. (A) Image is cropped into patches, where each patch x(i) first undergoes a different permutation E(i), then the inverse permutation E(i)T . (B) The learned weight of the linear self-organizing layer. The 12th column of W (i) at all positions i are reshaped into patches and visualized. When W (i) undergoes the inverse permutation E(i)T , they show similar patterns. (C) Visualization of the 37th column of W (i). Similar to (B). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': '5 ADDITIONAL RELATED WORKS', 'context_after': 'Evidence of self-organizing mechanism in the brain. In neuroscience, many works use the self- putations. Even though V1 functional organizations are present at birth, numerous studies indicate Learning with signal on non-euclidean geometry. 
In recent years, researchers from the machine learning community have made efforts to consider geometries and special structures beyond classic images, text, and feature vectors. This is the key motivation for geometric deep learning and graph neural networks (GNN). Many works generalizes common operator for processing Euclidean data We compared their proposed neural network architecture with the ViT backbone used in URLOST. Recent research also explores adapting the Transformer to domains beyond Euclidean spaces such and then passes the output to a transformer. This approach is undeniably more flexible but requires all subgraphs to be globally aligned. Furthermore, the self-organizing layer in URLOST generalizes Self-supervised learning. Self-supervised learning (SSL) has made substantial progress in recent years. Different SSL method is designed for each modality, for example: predicting the masked/next descent scalability with a vast amount of unlabeled data and have shown their power by achieving performance on par with or even surpassing supervised methods. They have also exhibited huge 6 DISCUSSION, LIMITATIONS, AND FUTURE DIRECTION Unlike the self-organizing layer, the procedure for aggregating similar dimensions together is sepa- rate from the training of MAE. Given the effectiveness of the self-organizing layer, this design could 10 REFERENCES ', 'paragraph_idx': 35, 'before_section': '5 ADDITIONAL RELATED WORKS', 'context_before': 'Several interconnected pursuits are linked to this work, and we will briefly address them here: ', 'modified_lines': 'Topology in biological visual signal. 2-D topology of natural images is a strong prior that requires many bits to encode [20; 3]. Such 2-D topology is encoded in the natural image statistic [71; 37]. Optic and neural circuits in the retina result in a more irregular 2-D topology than the natural im- age, which can still be simulated [66; 59; 60; 58; 78; 41]. This information is further processed by the primary visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of visual input from retina is encoded by the neuron in primary visual cortex [53; 27; 36; 26; 83; 61]. These study provide evidence that topology under retinal ganglion cell and V1 neurons can be recovered. The theory and computational model of how visual system code encodes such 2-D is well-studied in computational neuroscience. The self-organizing map (SOM) was proposed as the first compu- tational model by Kohonen in 1982. The algorithm produces a low-dimensional representation of a higher-dimensional dataset while preserving the topological structure of the data [43]. SOM is also motivated to solve the “unscramble” pixels problem by “descrambling” by mapping pixels into a 2D index set by leveraging the mutual information between pixels at different locations. More detail is in Appendix A.12. [67] tackles the same problem with manifold learning. organizing maps (SOM) as a computational model for V1 functional organization [23; 76; 1; 24; 52; 43]. In other words, this idea of self-organizing is a principle governing how the brain performs com- that the brain’s self-organizing mechanisms continue after full development [30; 68; 40]. like 2d convolution and attention [7; 49; 18; 28]. Due to the natural of graph neural network, they often only work on limited data regime and do not scale to large data [69; 10; 9]. However, recent development in [32] shows this direction is prominent. They successfully scales GNN to ImageNet. 
9 Published as a conference paper at ICLR 2025 as [16; 15; 29]. Ma et al. [45] treats an image as a set of points but relies on 2D coordinates. UR- LOST employs a single mutual information graph to define the topology of the high-dimensional signal. Gao et al. [29], on the other hand, is designed to handle graph data, where each data point corresponds to a distinct graph. It segments a graph into “subgraphs,” processes them with a GNN, the “patch resizer” mechanism from FlexiViT used in Beyer et al. [4]. Finally, the lines of works follows the “perceiver” architecture is very related to URLOST [39; 38]. “Perceiver” also implic- itly aggregates similar dimensions of high dimensional data and processes the similar dimension with an attention mechanism. [88] is a follow-up work that combines the Perceiver architecture and masked-prediction objective. However, the key difference is that the ”perceiver” computes similar- ity between dimensions on a single example signal. For image data, this is essentially clustering pixels based on the pixel intensity of color in a signal image. On the other hand, URLOST computes similarity over statistics of the distribution of the signal. Additionally, ”perceiver” implicitly aggre- gates similar dimensions, while URLOST explicitly aggregates similar dimensions and recovers the original topology of the signal. token in NLP[21; 63; 8], solving pre-text tasks, predicting masked patches, or building contrastive image pairs in computer vision [48; 34; 87; 12; 33; 90]. These SSL methods have demonstrated potential in cross-modal learning, such as the CLIP by Radford et al. [64]. The success of most current state-of-the-art self-supervised representation learning methods relies on the assumption that the data has known stationarity and domain topology. In this work, we explore unsupervised representation learning under a more general assumption, where the stationarity and topology of the data are unknown to the machine learning model and its designers. We argue that this is a general and realistic assumption for high-dimensional data in modalities of natural science. We propose a novel unsupervised representation learning method that works under this assumption and demonstrates our method’s effectiveness and generality on a synthetic biological vision dataset and two datasets from natural science that have diverse modalities. We also perform a step-by-step ablation study to show the effectiveness of the novel components in our model. be sub-optimal and could be improved. Learning the clusters end-to-end with the representation via back-propagation is worth investigating in the future. Additionally, the computational costs of computing pairwise mutual information and clustering both scales quadratically with the number of dimensions of the signal. Although each procedure only needs to be performed once, the computa- tion could become too slow and infeasible for extremely high-dimensional data (d > 10, 000) with our current implementation as shown in Appendix A.9. In the Appendix, we provide a benchmark of the runtime of our implementation. Nevertheless, we think this problem could be resolved by a GPU implementation for clustering and mutual information. Additionally, although natural science datasets such as TCGA and V1 calcium imaging are considered large datasets in their respective domains, they are still small compared to the datasets used in computer vision. We expect larger natural science datasets to emerge as measuring and imaging technology advances. 
Additionally, adapting URLOST to support contrastive learning objectives, such as SimCLR, presents another intriguing direction. 7 ACKNOWLEDGEMENT We would like to specially thank Bruno Olshausen for suggesting key prior work on computational models and biological evidence for recovering the 2D topology of natural images. We also appreci- ate the helpful suggestions from Surya Ganguli and Atsu Kotani. Published as a conference paper at ICLR 2025 ', 'original_lines': 'Topology in biological visual signal. 2-D topology of natural images is strong prior that requires many bits to encode [19; 3]. Such 2-D topology is encoded in the natural image statistic [64; 36]. Kohonen in 1982 first come up with self-orgnaizing map (SOM). The algorithm produces a low- dimensional representation of a higher-dimensional dataset while preserving the topological struc- ture of the data [39]. It is also motivated to solve the “unscramble” pixels problem by “descram- bling” by mapping pixels into a 2D index set by leveraging the mutual information between pixels at different locations. More detail in Appendix A.12. [61] tackles the same problem with manifold learning. The community tried to feed the “recovered topology” to a graph neural network (GNN) [6], but suffered from inherent scalability issues on using GNN to do unsupervised learning. Optic and neural circuits in the retina result in a more irregular 2-D topology than the natural image, which can still be simulated [60; 54; 55; 53; 70; 38]. This information is further processed by the primary visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of visual input from 9 Under review as a conference paper at ICLR 2025 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 retina is encoded by the neuron in primary visual cortex [48; 26; 35; 25; 74; 56]. These study provide evidence that topology under retinal ganglion cell and V1 neurons can be recovered. organizing maps (SOM) as a computational model for V1 functional organization [22; 69; 1; 23; 47; 39]. In other words, this idea of self-organizing is a principle governing how the brain performs com- that the brain’s self-organizing mechanisms continue after full development [29; 62; 37]. like 2d convolution and attention [6; 45; 17; 27]. Due to the natural of graph neural network, they often only work on limited data regime and do not scale to large data [63; 9; 8]. However, recent development in [31] shows this direction is prominent. They successfully scales GNN to ImageNet. as [15; 14; 28]. Ma et al. [41] treats an image as a set of points but relies on 2D coordinates. URLOST employs a single mutual information graph to define the topology of the high-dimensional signal. Gao et al. [28], on the other hand, is designed to handle graph data, where each data point corresponds to a distinct graph. It segments a graph into ”subgraphs,” processes them with a GNN, the ”patch resizer” mechanism from FlexiViT used in Beyer et al. [4] . token in NLP[20; 58; 7], solving pre-text tasks, predicting masked patches, or building contrastive image pairs in computer vision [44; 33; 78; 11; 32; 80]. These SSL methods have demonstrated potential in cross-modal learning, such as the CLIP by Radford et al. [59]. 
The success of most current state-of-the-art self-supervised representation learning methods relies on the assumption that the data has known stationarity and domain topology. In this work, we ex- plore unsupervised representation learning under a more general assumption, where the stationarity and topology of the data are unknown to the machine learning model and its designers. We argue that this is a general and realistic assumption for high-dimensional data in modalities of natural science. We propose a novel unsupervised representation learning method that works under this assumption and demonstrates our method’s effectiveness and generality on a synthetic biological vision dataset and two datasets from natural science that have diverse modalities. We also per- form a step-by-step ablation study to show the effectiveness of the novel components in our model. During experiments, we found that clustering is crucial for the quality of representation learning. be sub-optimal and could be improved. Learning the clusters end-to-end with the representation via back-propagation is worth future investigation. Additionally, the computational costs of computing pairwise mutual information and clustering both scale quadratically with the number of dimensions of the signal. Although each procedure only needs to be performed once, computation could be- come too slow and infeasible for extremely high-dimensional data (d > 10, 000) with our current implementation. We provide a benchmark on the runtime of our implementation in the Appendix. Nevertheless, we think this problem could be resolved by a GPU implementation for clustering and mutual information. Additionally, although natural science datasets such as TCGA and V1 calcium imaging are considered large datasets in their respective domains, they are not even close to the scale of popular domain data like computer vision. We expect larger natural science datasets will emerge as measuring and imaging technology advances. It’s a great direction to perform URLOST on even larger datasets. Under review as a conference paper at ICLR 2025 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': 'Abstract', 'after_section': None, 'context_after': 'For CIFAR10, we use K = 20, α = 0.5 and β = 2. We set the number of clusters to be 64. For V1 neural recording, we use K = 15, α = 0 and β = 1. We set the number of clusters to 200. For TCGA dataset, we use K = 10, α = 0 and β = 1. We set the number of clusters to 32. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We also provide the exact hyperparameters we used for the experiments in this paper: the parameter of URLOST MAE is the same as MAE except for the specific hyper-parameter in the method section. ', 'modified_lines': '', 'original_lines': ' 19 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'simpler benchmarks. This suggests that scaling to larger and more complex datasets is a promising direction. 
Our study explores the novel settings of unsupervised representation learning without knowing the data topology and stationarity, demonstrating the effectiveness of our method across ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '15.6% 5.9% ', 'modified_lines': '', 'original_lines': 'Table 7: Runtime of each module during the forward operation. We report the runtime of each module during inference for a single example. We use the model trained on permuted CIFAR10 dataset. The experiment is performed using a single RTX 2080TI and is averaged over 500 trials. Module Cluster Indexing Self Organizing Layer Encoder Decoder Runtime 0.27 ms 1.11 ms 17.5 ms 20.4 ms A.10 SCALABILITY Scalability is not the primary focus of this work. In general, unsupervised representation learning benefits from scaling to a larger amount of data, and the MAE model, which our method is built upon, demonstrates a level of scaling [33]. We believe URLOST inherits this potential for the fact that it shows stronger advantages on more challenging tasks, as seen with CIFAR-10 compared to ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 ADDITIONAL RELATED WORKS', 'after_section': None, 'context_after': 'to find someone rewired your optic nerve (or you have been implanted with a prosthetic retina). The signals from the retina to the brain are intact, but the wires are all mixed up, projecting to the wrong places. Can the brain learn to “descramble” the image?” Kohonen proposes the “self- Figure 10: The effect of additive noise on pairwise mutual information. We plot averaged mutual information between pair of pixels. We aggregate all pairs with same distance. Each curves represent a specific noise level. Although the curve becomes lower as noise increase, the decaying features of the curve remain the same. scrambling” pixels, SOM maps each pixel into a 2D index set by leveraging the mutual information between pixels at different locations. The parameters of SOM include a set of neurons, where each neuron has a location index. Let Wv denote the weight vector of the vth neuron and D the input ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'A.12 SELF-ORGANIZING MAP (SOM) ', 'modified_lines': 'Kohonen in 1982 presents a fascinating thought experiment [43]: “Suppose you woke up one day 23 Published as a conference paper at ICLR 2025 Table 7: Runtime of each module during the forward operation. We report the runtime of each module during inference for a single example. We use the model trained on permuted CIFAR10 dataset. The experiment is performed using a single RTX 2080TI and is averaged over 500 trials. Module Cluster Indexing Self Organizing Layer Encoder Decoder Runtime 0.27 ms 1.11 ms 17.5 ms 20.4 ms organizing map” (SOM) algorithm to address this problem. SOM is an unsupervised machine learn- ing technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional dataset while preserving the topological structure of the data. In the task of “de- ', 'original_lines': 'Kohonen in 1982 presents a fascinating thought experiment [39]: “Suppose you woke up one day organizing map” (SOM) algorithm to address this problem. SOM is an unsupervised machine learn- ing technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional dataset while preserving the topological structure of the data. 
In the task of “de- 22 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 Under review as a conference paper at ICLR 2025 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'fold learning to uncovered topology 2-d topology from natural images. However, when the intrinsic dimension of the manifold is larger than 2d, it is difficult to integrate the ”uncovered” topology with state-of-the-art self-supervised learning algorithms. A.13 VISUALIZING THE WEIGHT OF SELF-ORGANIZING ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'dimensional, such as s = (i, j), and Wv(s) will learn to represent a set of “descrambled” pixels, where s represents the correct indices of the pixels. In other words, the index set defines the correct topology of the image as a 2D grid, and SOM maps the pixels onto this 2D topology by leveraging ', 'modified_lines': 'the mutual information between pixels at different locations. Other methods such as [67] use mani- 24 Published as a conference paper at ICLR 2025 ', 'original_lines': 'the mutual information between pixels at different locations. Other methods such as [61] use mani- ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 11: Visualize the weight of the self-organizing layer after applying inverse permutation. A snapshot of E(i)T W (i) is shown at different training epoch. The number of epochs is shown on the top row. Each figure shows one column of the weight of the self-organizing layer, at different ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'which confirms our hypothesis. As training goes on, the pattern E(i)T W (i) becomes more and more visually similar, which implies the model learns to gradually learn to align the input clusters. ', 'modified_lines': '', 'original_lines': '23 Under review as a conference paper at ICLR 2025 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-03 05:53:50
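The record above quotes the paper's parameterization of its self-organizing layer as a per-cluster linear map, g(x, W(i)) = W(i)x. As a purely illustrative sketch (not part of the recorded revision text), the snippet below shows one way that idea could be written in PyTorch; the cluster count, the dimensions, the assumption of equal-size clusters, and all module and variable names are my own choices for the example rather than details taken from the source.

```python
# Illustrative sketch only: a per-cluster linear "self-organizing" layer,
# following the quoted parameterization g(x, W^(i)) = W^(i) x.
# Cluster count, sizes, and names are assumed; real clusters may differ in size.
import torch
import torch.nn as nn


class SelfOrganizingLayer(nn.Module):
    def __init__(self, num_clusters: int, in_dim: int, out_dim: int):
        super().__init__()
        # One independent linear map W^(i) per cluster i (weights are not shared).
        self.proj = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_clusters)]
        )

    def forward(self, clusters):
        # clusters[i]: tensor of shape (batch, in_dim) holding the i-th cluster
        # of raw signal dimensions. Output: (batch, num_clusters, out_dim).
        return torch.stack([g(x) for g, x in zip(self.proj, clusters)], dim=1)


# Hypothetical usage: 64 clusters of 48 raw dimensions each, embedded as 128-d tokens.
layer = SelfOrganizingLayer(num_clusters=64, in_dim=48, out_dim=128)
batch = [torch.randn(8, 48) for _ in range(64)]
tokens = layer(batch)  # shape: (8, 64, 128)
```

Under this reading, each cluster of dimensions gets its own unshared projection, which is what distinguishes the layer from a single shared patch-embedding matrix applied to every cluster.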
ICLR.cc/2025/Conference
BkrbtKMneg
NVsLE7s8tY
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 Figure 2: The overview framework of URLOST. The high-dimensional input signal undergoes ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2 ', 'modified_lines': ' ', 'original_lines': 'EncEncℎ!ℎ"DecEnc"""!ℎBiological Vision SystemData w/ stationarity and topology Data w/o stationarity and topology… many more high -dimensional datain natureℎℎ?URLOSTNew ParadigmJoint EmbeddingMAE ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 on specific dataset and is defined in the experiment section. α, β and K are hyper-parameters. Set- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '3 ', 'modified_lines': ' ', 'original_lines': 'Raw SignalAligned ClustersSignal ClusteringSelf-organizingLayerEncDecMaskedAutoencoderRe-organizingLayerReconstructedSignal ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'On topology of the domain of signals. A signal is a function X(s) defined on some domain S, where s ∈ S. Let’s focus on S. While the definition of topology is a little abstract and relies on open sets, the intuition is that it defines a generalized sense of nearness, or whether two points are ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'Figure 8: Foveated retinal sampling (A) Illustration of a Guassian kernel shown in [14]. Diagram of single kernel filter parameterized by a mean µ′ and variance σ′. (B) the location of each Gaussian kernel is summarized as a point with 2D coordinate µ′. In total, the locations of 1038 Gaussian kernels are plotted. (C) The relationship between eccentricity (distance of the kernel to the center) and radius of the kernel is shown. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'integer valued ”time” but can instead take values that are multidimensional vectors in Rn, points on some manifolds and Lie Groups, or graphs [81; 50; 77; 42; 65; 5]. Let’s consider a real-valued random field X(s) defined on a homogeneous space S =s of points s equipped with a transitive ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the index was originally and frequently interpreted as time, it was generalized to random fields, where the index can be an arbitrary domain. That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real or ', 'modified_lines': '', 'original_lines': ' 21 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'organizing map” (SOM) algorithm to address this problem. SOM is an unsupervised machine learn- ing technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional dataset while preserving the topological structure of the data. In the task of “de- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'to find someone rewired your optic nerve (or you have been implanted with a prosthetic retina). 
The signals from the retina to the brain are intact, but the wires are all mixed up, projecting to the wrong places. Can the brain learn to “descramble” the image?” Kohonen proposes the “self- ', 'modified_lines': '', 'original_lines': ' 23 Published as a conference paper at ICLR 2025 Table 7: Runtime of each module during the forward operation. We report the runtime of each module during inference for a single example. We use the model trained on permuted CIFAR10 dataset. The experiment is performed using a single RTX 2080TI and is averaged over 500 trials. Module Cluster Indexing Self Organizing Layer Encoder Decoder Runtime 0.27 ms 1.11 ms 17.5 ms 20.4 ms Figure 10: The effect of additive noise on pairwise mutual information. We plot averaged mutual information between pair of pixels. We aggregate all pairs with same distance. Each curves represent a specific noise level. Although the curve becomes lower as noise increase, the decaying features of the curve remain the same. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}]
2025-03-21 07:33:07
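Each record above stores its edits as a Python-literal list of dictionaries with keys such as 'section', 'context_before', 'context_after', 'modified_lines', and 'original_lines'. The sketch below is a minimal, hedged example of how such a string could be parsed with the standard library; the toy record and all variable names are invented for illustration and are not taken from the rows above.

```python
# Minimal sketch, assuming the edit list is stored as a Python-literal string
# (as it appears in the records above). The tiny toy record here is invented;
# real rows are far longer, but the structure is the same.
import ast

raw_content = (
    "[{'section': 'Abstract', 'context_before': '...', 'context_after': '...', "
    "'modified_lines': 'new text ', 'original_lines': 'old text ', "
    "'paragraph_idx': 2}]"
)

edits = ast.literal_eval(raw_content)  # evaluates literals only; no code is executed

for edit in edits:
    section = edit.get("section")
    before = edit.get("original_lines", "")
    after = edit.get("modified_lines", "")
    # e.g. inspect how much text a revision changed in each section
    print(f"{section!r}: {len(before)} chars -> {len(after)} chars")
```

ast.literal_eval is used rather than eval so that only literal structures (strings, numbers, lists, dicts, booleans, None) are accepted when reading a record.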