Insights from the ICLR Peer Review and Rebuttal Process
Abstract
This study analyzes the ICLR 2024 and 2025 peer review processes, focusing on score dynamics and reviewer interactions, and uses LLM-based text categorization to identify trends and factors influencing score changes.
Peer review is a cornerstone of scientific publishing, including at premier machine learning conferences such as ICLR. As submission volumes increase, understanding the nature and dynamics of the review process is crucial for improving its efficiency, effectiveness, and the quality of published papers. We present a large-scale analysis of the ICLR 2024 and 2025 peer review processes, focusing on before- and after-rebuttal scores and reviewer-author interactions. We examine review scores, author-reviewer engagement, temporal patterns in review submissions, and co-reviewer influence effects. Combining quantitative analyses with LLM-based categorization of review texts and rebuttal discussions, we identify common strengths and weaknesses for each rating group, as well as trends in rebuttal strategies that are most strongly associated with score changes. Our findings show that initial scores and the ratings of co-reviewers are the strongest predictors of score changes during the rebuttal, pointing to a degree of reviewer influence. Rebuttals play a valuable role in improving outcomes for borderline papers, where thoughtful author responses can meaningfully shift reviewer perspectives. More broadly, our study offers evidence-based insights to improve the peer review process, guiding authors on effective rebuttal strategies and helping the community design fairer and more efficient review processes. Our code and score changes data are available at https://github.com/papercopilot/iclr-insights.
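To make the headline finding concrete, here is a minimal sketch of the kind of analysis the abstract describes: computing each review's rebuttal-phase score change and correlating it with the review's initial score and the co-reviewers' mean rating. The file name and column names (`paper_id`, `score_before`, `score_after`) are assumptions about the released score-change data, not the repository's actual layout; adapt them to the files in the linked repo.

```python
# Hypothetical sketch: relate per-review score changes to the initial
# score and to co-reviewers' ratings. File and column names are
# assumptions; see https://github.com/papercopilot/iclr-insights
# for the actual data layout.
import pandas as pd

# One row per review: paper id, score before rebuttal, score after.
reviews = pd.read_csv("score_changes.csv")  # hypothetical filename
reviews["delta"] = reviews["score_after"] - reviews["score_before"]

# Mean initial score of the *other* reviewers on the same paper:
# (sum of all initial scores minus own score) / (review count minus 1).
grp = reviews.groupby("paper_id")["score_before"]
total = grp.transform("sum")
count = grp.transform("count")
reviews["co_reviewer_mean"] = (total - reviews["score_before"]) / (count - 1)

# Papers with a single review have no co-reviewers; drop them.
reviews = reviews[count > 1]

# Correlate the rebuttal-phase score change with both candidate predictors.
print(reviews[["score_before", "co_reviewer_mean", "delta"]].corr()["delta"])
```

Under the paper's finding, both correlations should be nontrivial: lower initial scores leave more room to move up, and a higher co-reviewer mean pulls a review's score toward the panel consensus.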
Community
The paper analyzes the ICLR 2024–2025 peer review processes, examining scores, reviewer–author interactions, and rebuttal effects using quantitative and LLM-based methods. The findings show that initial scores and co-reviewer ratings strongly predict score changes, while thoughtful rebuttals can meaningfully improve outcomes for borderline papers.
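For readers curious what "LLM-based categorization" of review text might look like in practice, below is a minimal sketch using the OpenAI chat API. The category list, prompt, and model name are illustrative assumptions, not the authors' actual taxonomy or pipeline.

```python
# Hypothetical sketch of LLM-based categorization of review weaknesses.
# The categories, prompt, and model choice are assumptions; the paper's
# actual pipeline may differ.
from openai import OpenAI

CATEGORIES = ["novelty", "experiments", "clarity", "theory", "related work"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def categorize_weaknesses(review_text: str) -> str:
    """Ask the model to map a review's weaknesses onto a fixed taxonomy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You label peer-review weaknesses. Reply with a "
                        f"comma-separated subset of: {', '.join(CATEGORIES)}."},
            {"role": "user", "content": review_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(categorize_weaknesses(
    "The method is incremental and the baselines are weak."))
```

Aggregating such labels per rating group is one way to surface the common strengths and weaknesses the paper reports.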
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- What Drives Paper Acceptance? A Process-Centric Analysis of Modern Peer Review (2025)
- ReviewerToo: Should AI Join The Program Committee? A Look At The Future of Peer Review (2025)
- ReviewGuard: Enhancing Deficient Peer Review Detection via LLM-Driven Data Augmentation (2025)
- LLM-REVal: Can We Trust LLM Reviewers Yet? (2025)
- Paper Copilot: Tracking the Evolution of Peer Review in AI Conferences (2025)
- Gen-Review: A Large-scale Dataset of AI-Generated (and Human-written) Peer Reviews (2025)
- From Authors to Reviewers: Leveraging Rankings to Improve Peer Review (2025)