Saturday, November 07, 2015

Review of "The tools and techniques of the adversarial reviewer"

This is my review of the paper "How NOT to review a paper: The tools and techniques of the adversarial reviewer" by Graham Cormode.

This paper appeared in SIGMOD Record in December 2008, yet appears not to have gone through proper peer review. The paper suffers from at least three major problems:

Motive - is it really an interesting problem that reviewers are adversarial? Surely if reviewers colluded with authors, we'd end up accepting all kinds of rubbish, further swamping our already bursting filing cabinets and cloud storage, and taking cycles away from us just when we could be updating our blogs or commenting on someone's Facebook status.
Is the fact that a reviewer doesn't like a paper a problem? Do we know that objective knowledge and reasoning based on the actual facts are the best way to evaluate scholarly work? Has anyone tried random paper selection to see if it is better or worse?

Means - the paper does not provide evidence to support its own argument. While there is much anecdote, there are no data. The synthetic extracts from fictional reviewers are not evaluated quantitatively, e.g. to see which are more likely to lead to rejection; it is not even shown whether accepted papers attract more adversarial reviews than rejected papers, which may draw mere "meh" commentary.
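To show how little effort this would have taken, here is a minimal sketch of the omitted comparison in Python. The contingency counts are entirely hypothetical, invented for illustration; the paper supplies no such data.

```python
# A minimal sketch of the missing quantitative evaluation: do adversarial
# reviews co-occur with rejection? All counts below are hypothetical.

# Hypothetical 2x2 contingency table of review counts by paper outcome.
counts = {
    "rejected": {"adversarial": 40, "meh": 60},
    "accepted": {"adversarial": 25, "meh": 75},
}

# Rate of adversarial reviews among each outcome group.
for outcome, row in counts.items():
    total = row["adversarial"] + row["meh"]
    print(f"{outcome}: {row['adversarial'] / total:.0%} of reviews are adversarial")

# Odds ratio > 1 would suggest adversarial reviews co-occur with rejection.
odds_ratio = (counts["rejected"]["adversarial"] * counts["accepted"]["meh"]) / \
             (counts["rejected"]["meh"] * counts["accepted"]["adversarial"])
print(f"odds ratio: {odds_ratio:.2f}")
```

With these invented counts the odds ratio comes out at 2.0, which would have given the author at least one number to argue about.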

Missed Opportunity - the paper passes up a great opportunity to publish the names of the allegedly adversarial reviewers together with examples of their adverse reviews, to support its argument and to allow other researchers to see whether the results are reproducible, repeatable, and even useful.
For example, multiple programme committees could be constituted in parallel and equipped with versions of reviewing software that modify reviews to include more or fewer adversarial comments. The outcomes of the PC meetings could generate multiple parallel conference events, and the quality of the different events could be compared. If particular outcomes proved superior, the review process could subsequently be fully automated. It is only a small step from there to automating the authoring of the papers themselves, and then the academic community would be relieved of a whole slew of irksome labour and could get on with its real job.
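The random assignment for such a trial is, of course, trivial. Below is a minimal sketch: the committee names, treatment arms, and the inject_adversarial_comment helper are all invented for illustration, not features of any existing reviewing software.

```python
import random

# Hypothetical treatment arms: how much adversarial commentary the
# reviewing software injects into each committee's reviews.
ARMS = ["none", "mild", "fully_adversarial"]

def assign_committees(committees, seed=42):
    """Randomly assign each programme committee to a treatment arm."""
    rng = random.Random(seed)
    return {pc: rng.choice(ARMS) for pc in committees}

def inject_adversarial_comment(review, arm):
    """Hypothetical helper: append an adversarial remark per treatment arm."""
    remarks = {
        "none": "",
        "mild": " The contribution is incremental at best.",
        "fully_adversarial": " This paper should never have been submitted.",
    }
    return review + remarks[arm]

if __name__ == "__main__":
    arms = assign_committees(["PC-A", "PC-B", "PC-C"])
    print(arms)
    print(inject_adversarial_comment("Sound methodology.", arms["PC-A"]))
```

Comparing the resulting conference events is left, in the finest tradition, as future work.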
