I recently had occasion to use OpenReview as the reviewing platform for a conference. The conference was the International Conference on Learning Representations (ICLR), a core Machine Learning conference. I used it both as a reviewer and as an author. OpenReview is distinguished by two features:
- It facilitates dialog between the authors and the reviewers during the peer-review process, rather than the typical process of a one-shot rebuttal submitted by the authors in response to the reviews, or the even more entrenched mode in which the authors simply see the reviews and the final decision.
- It makes public the reviews and (in the case of ICLR) the dialog between the authors and the reviewers. These are kept available for the long term.
My limited experience with OpenReview has been largely positive. My takeaway is that this is a promising direction for making peer reviewing, the gold standard of academic research, even better. In this article, I distill a few things that are particularly appealing about this experience. These may nudge other technical communities, like my own communities of systems and dependability, to adopt this, or a similar, reviewing platform. I also point out a few rough edges in this reviewing mode.
What I Like About The Concept
- The open dialog improves the understanding of the work. The fact that the dialog is archived means others outside the reviewing circle get to benefit from the work put into the reviewing process. As a reader, I have sometimes benefited greatly from the summaries and the back-and-forth between the reviewers and the authors. The broader question that this mode satisfactorily addresses is how to spread more widely the fruits of the labor of the PC [1]. Collectively, the PC members spend countless hours reading and reviewing submissions to a conference. It is a shame that for most of our conferences, such valuable technical material fades into oblivion, hidden away on review sites.
- The open dialog improves the quality of the work, but for a subsequent submission. In principle, the decision on the current submission is to be made based on what was submitted at the deadline, and not on additional experiments or writing produced during the OpenReview process. This is mostly followed in practice too. What this means is that as an author, my submission will be more polished the next time around as I distill the lessons from the dialog.
- The open dialog improves the quality of the reviews. Most in the systems community, where we have had author rebuttals to reviews for quite a while, acknowledge that rebuttals have a “side benefit” of improving the quality of the reviews. I would hesitate to turn in a half-baked review based on a quick read of the paper if I knew that I could be called out by the authors, sometimes in strident tones. And doubly so, since that calling out will be visible to the entire PC. The OpenReview system super-charges that effect by having the authors rebut (I know that sounds more aggressive than I mean it to) multiple rounds of comments from the reviewers. So most reviewers try to dig a little deeper to give constructive feedback. Misunderstandings get clarified through the dialog. And all this makes us better reviewers.

Some Rough Edges
- This concept does not necessarily improve the quality of the decision making. This means good submissions still get rejected and undeserving submissions still get through. My hypothesis, unproven and possibly unprovable, is that this happens at about the same rate as at conferences that do not use an open reviewing system. This happens because reviewers are human too, to state the obvious. We take positions and take pride in appearing resolute in our judgments. Thus, we hardly ever change our views on a submission, and correspondingly our numerical scores, despite the dialog with the authors.
- This mode increases the load on reviewers. The reviewers are now expected to engage with the submission over an extended period of time and at unscheduled points (i.e., on a continual basis), with the authors and with the other reviewers and area chairs. For many of us who already juggle a few too many PC memberships, this stretches us thin. Consequently, some valuable PC members may decide to sit out a conference, which would be a lost opportunity.
- This is a down-in-the-weeds rough edge. The platform does not keep a nice threading of the reviewer comments, the author responses, and the revisions of the paper. Thus, when there are multiple rounds of back-and-forth and multiple uploaded revisions, it is difficult to keep track of where we are. A platform that does this kind of threading well is Slack, and maybe the developers of OpenReview can take a leaf out of its book.
To Sum Up
Openness is a bedrock of research. We publish our discoveries and want them to be accessible as widely as possible. We provide artifacts with our publications (software prototypes, hardware designs, etc.) that allow others to replicate our results and to build on our discoveries. This is how it should be; this is hallowed ground for academic science and engineering.
Counter-intuitively, reviews of our work have not been openly available so far. Some of this is for understandable reasons, foremost among them that we want to preserve the anonymity of the reviewers. However, the current state of affairs has been a lost opportunity overall. We can relatively easily make the reviews on submissions available to the wider technical community. We should also enable dialog between the authors and the reviewers and, when things have stabilized, make the dialog available for all to view. This will improve the reviews, improve the quality of the science and the engineering, and, at a grandiose level, allow the broader public a peek behind the curtains of how science and engineering discoveries are made. That is win, win, and win.
[1] Every conference has a PC, a term that stands for Program Committee. The PC comprises researchers who are considered experts in the field of the conference. The PC members read and review the papers and decide which submissions will be accepted and which rejected. A PC chair (or sometimes two co-chairs) guides the process along.