The reviews are in
The reviews are in (well, actually they’re not, but they’re not due for a few more days). By which I mean peer reviews. I’m serving on a panel that has to rank a number of proposals and thought I would describe my usual process. I’m curious to know how others do this as well.
One aspect of the peer review process is that it's supposed to be confidential: you don't disclose the contents of proposals or even who else is on the panel. (Sometimes panel membership is made public after the fact.) I'm not sure this is always a good policy, but I'm abiding by it here and not discussing which program I'm reviewing for.
Usually the first thing I do is quickly skim all the proposals. I try to read them in somewhat-random order, not sorted by PI name or submission date. Although it’s pretty common to refer to a proposal by the PI’s last name when discussing it, I’m going to try not to do that this time, as a reminder to myself that we’re judging the ideas, not the people. While reading I make notes in the PDF and usually type a quick summary.
After a first read-through, I then try to sort the proposals into related groups. This is not necessarily to judge them against each other; it’s more that reading through all of the proposals about a specific topic together makes it easier to keep the issues in working memory.
Once I've finished reading and sorting, it's time to look at the program's specific criteria. What is the funding agency/telescope/etc looking for, and how important is each criterion? Often programs will ask for a numerical score in a number of categories; if they don't, I might make up a scoring scheme myself, since it makes things easier to sort. With the criteria in mind I read through the proposals a second time and try to score them. Then I sort the proposals by score and see if I'm happy with the relative ranking; if not, I adjust where I can figure out how.
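For the quantitatively inclined, the score-and-sort step can be sketched as a tiny script. To be clear, this is just an illustration, not anything I actually run: the category names, weights, scale, and proposal labels are all invented for the example.

```python
# Hypothetical sketch of the score-and-sort step. The categories,
# weights, and proposal names below are invented for illustration.

# Per-category weights reflecting how important each criterion is
# to the program (these would come from the program's guidelines).
WEIGHTS = {"science_merit": 0.5, "feasibility": 0.3, "team": 0.2}

# Scores assigned on a second read-through. Here 1 = best, 5 = worst,
# a common panel convention, though scales vary by program.
proposals = {
    "Proposal A": {"science_merit": 2, "feasibility": 1, "team": 2},
    "Proposal B": {"science_merit": 1, "feasibility": 3, "team": 1},
    "Proposal C": {"science_merit": 3, "feasibility": 2, "team": 3},
}

def overall(scores):
    """Weighted average of a proposal's per-category scores."""
    return sum(WEIGHTS[cat] * s for cat, s in scores.items())

# Sort by combined score (lower = better on this scale) and
# eyeball the ranking to see whether it matches your gut sense.
ranking = sorted(proposals, key=lambda name: overall(proposals[name]))
for name in ranking:
    print(f"{name}: {overall(proposals[name]):.2f}")
```

The point of computing a combined number isn't that the arithmetic is authoritative; it's that a sorted list makes disagreements between the formula and your intuition visible, which is exactly where to look when adjusting scores.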
For programs where there’s going to be an interactive (in-person, telecon, etc) discussion, I’d usually stop there and submit scores — most times when I’ve done this kind of thing, the scores have to be submitted in advance. I know that scores can and will be adjusted after the discussion (here is an interesting paper on how much scores change; sorry it’s paywalled) so I don’t try to finesse things too much. If there is no planned discussion, I would probably go through another round of read-and-score after a few days’ break.
Then there is the actual discussion, where I try to be both ready to support my own views on a particular proposal and willing to change my mind if someone else presents better evidence. This is the really hard part of peer review in my experience — it’s not that people on a panel are nasty or unprepared, it’s that most of what we review is very good, and knowing that many good proposals will not be selected is No Fun. It’s rewarding in one sense, in that you get to read about lots of interesting ideas, but it can be depressing in an overall state-of-the-field sense.
The final step in this process is usually writing some kind of feedback to the proposer. My experience is that people usually only read these if their proposal is not selected, so it's important to give reasons why it wasn't. Sometimes this is easy, for example if there is a fatal flaw in an argument, but more often it's much subtler: something the proposer thought was perfectly clear isn't, or they missed some key criterion, or they just didn't explain the idea well enough. Giving good feedback is really tricky, and I would not claim to have mastered the art.
That’s my personal “art of panel reviewing” in a nutshell. And now I have to get back to it; those proposals won’t read themselves!