
Preliminary reflections on whether reviews of professional work should include comments on grammar and style (#1219)

Topics/tags: Academia, SIGCSE TS, rambly

Disclaimer: This musing represents my rough and preliminary thoughts on a topic. As the note in the footer suggests, it is in no way intended to represent the policies or practices of the Association for Computing Machinery’s Special Interest Group on Computer Science Education (ACM SIGCSE), the corresponding SIGCSE Technical Symposium, or the various leaders associated with SIGCSE or the Technical Symposium.

How to begin? Where to begin? When to begin?

I am a computer scientist. I am also a computer science (CS) educator. Perhaps the order is the other way around. Some of my scholarship is in the area of computer science education (CSEd). Most of it.

I love what I do. And I’ve been successful in my career, at least according to the measures of success that I care about. But I also know that both CS and CSEd have practices that complicate professional reviews, particularly reviews for tenure and promotion. Or perhaps it’s academia as a whole that complicates things.

Here’s the issue at hand, or at least the backstory to the issue at hand. In many subfields of computer science, conference publications serve as the archival form of research. I haven’t spent enough time looking at other areas to see what they do, but I think it’s generally less complex than what we do in CS. Computer science has a model in which top conferences primarily use double-anonymous review procedures (i.e., authors are anonymous to reviewers, and reviewers are anonymous to authors) and have a comparatively low acceptance rate (30% or below, some as low as 5–10%). Why conferences rather than journals? Because the turnaround time is much faster for conferences.

It’s a decent, but not ideal, situation, at least in terms of supporting, assessing, and sharing research. To ensure timeliness, most conferences (but not all) have only a single-pass reviewing system. Authors submit their papers. Reviewers provide feedback and ratings. The Program Chairs or their proxies select the papers with the best reviews. Authors are expected to fix anything the reviewers suggest, but no one really checks [1]. So the multiple rounds of improvement associated with a journal article aren’t there, at least for submissions that are accepted the first time [2].

It’s different enough from what happens at, say, a Chemistry conference or a Sociology conference that some institutions and some personnel committee members are uncomfortable treating conference publications in computer science as equivalent to journal publications or monographs in other fields. As I said before, I’m not sure what the submission processes for their conferences are like, but it sounds like they are much less restrictive.

To further complicate matters, many institutions fail to understand or acknowledge the vital service of reviewing papers.

In CSEd, we also face the additional challenge that many people undervalue disciplinary education research, even at institutions that claim to prioritize education. Not enough people have read Boyer’s Scholarship Reconsidered, or at least not enough have embraced it.

Why am I telling you all of this? Because I’ve been thinking about reviewing for the SIGCSE Technical Symposium, which serves as my primary academic home. I’ve reviewed for the SIGCSE TS and other conferences for about thirty years. I’ve been a meta-reviewer [3,4] for at least a decade. I’m taking on other leadership roles, too.

In all this time, I had thought we had an appropriate system. I haven’t always been happy when I’ve had papers rejected, finding that some reviewers seem to have misunderstood my work or brought in extraneous issues. But I’ve had similar experiences whenever I’ve been reviewed, whether for conferences, journals, or grants. And I’ve taken my roles seriously.

This past year, things blew up in the SIGCSE TS community because of issues in reviewing. It started with some inappropriate reviews in the panels track, which, unlike the main paper track, does not have anonymous reviews; to assess a prospective panel, you need to know who is on the panel and what perspective they will bring. But you should not call out irrelevant aspects of a panelist’s identity.

My goal in this musing is not to reflect on what went wrong in those reviews, nor on how we can ensure that it never happens again. I won’t consider deeper underlying issues of bias in reviewing. Those are issues that both the SIGCSE leadership and the SIGCSE community are working to address. Rather, my goal is to consider an issue that came up during the broader discussions of reviewing for the SIGCSE TS [5,7]; it has connections to issues of inclusion and bias, but I think it’s not something that immediately comes to mind when we talk about those issues.

In providing feedback to authors, some reviewers give detailed (some would say overly detailed) comments on issues of grammar and style. I believe that those reviewers intend to be helpful; since there’s only one round of feedback, they want to help ensure that the final paper is as polished as it can be.

It sounds okay, doesn’t it?

Not necessarily.

For one thing, SIGCSE is an international community. For a non-trivial proportion of the SIGCSE population, English is a second language, or a third, or beyond. Too many comments on grammar and style can imply to such authors that they don’t belong.

During the conversations on reviewing, I learned that excessive comments on grammar and style could also deter researchers from groups historically excluded from computing or CSEd. The feedback is not taken as a sign that I care enough about your work to want to help you present it in the best light, but rather as “You don’t seem qualified to write; you don’t belong” [8].

I used to be a reviewer who provided excessive feedback. Hell, I used to be a professor who offered excessive feedback. My students used to joke that there was typically more red ink [9] than black on pages once I was done editing them. But over the past few years, my amazing colleagues in the writing lab have helped me learn that, well, less is more. Students are more likely to read a few pointed comments than a giant list of corrections. Ideally, they’ll learn enough from those comments to make the other corrections themselves. Too many comments, and they’ll just panic. In the end, our goal is to help them get their ideas across, and if they get their ideas across, it’s not essential that their writing be perfect.

Mine certainly isn’t.

I’ve been better in my reviews of late. But that may also be because I’m writing fewer reviews and more meta-reviews; meta-reviews don’t generally speak to grammar and style. I haven’t written the dreaded “this paper needs careful review by a native English speaker” in years. I should never have written it. What was I thinking?

But I’m left with the question of what we should do about submissions with more than a few grammatical mistakes. Is “I could understand the points” enough? In some ways, it is. After all, we care about the results; the presentation should be secondary [10], provided it conveys information sufficiently well.

Unfortunately, there’s a voice in the back of my head, a voice that likely comes from a place of privilege, a voice that suggests that the quality of writing matters. Why? In part, it comes back to the question of faculty reviews (as opposed to paper reviews). When we’re trying to convince a tenure committee that a SIGCSE TS publication is valuable, and they pull up some random SIGCSE TS papers and find poor writing, we may find that our case has been undermined [11].

If the SIGCSE TS had alternatives to a one-and-done review system, it might be easier to address these issues. For example, we could have a category for “great work; presentation needs improvement” or some such, and give authors some time to clean up their papers. We might even identify people to work with authors to improve the writing in such cases [12].

That’s a complex enough change that it may take a few years to implement, so we should also consider other options in the meantime. Since some of this issue involves potentially conflicting values, I wonder what the community values more.

I should review my notes from the discussions so far, at least the notes I can find. And I look forward to in-person (and virtual) discussions at SIGCSE TS 2023 (in about two weeks) [14]. I wonder if the topic of reviewing for grammar and style will come up.


[1] Authors must also meet formatting and page-limit guidelines. The associated publishing company does check those issues.

[2] Some submissions that are rejected fade off into nothing. Others get revised and resubmitted to another conference.

[3] Also known as an Associate Program Chair or APC.

[4] APCs lead discussion amongst reviewers, provide summary reviews, and recommend acceptance or rejection based on the consensus of the reviewers.

[5] Many additional issues came up: bias against or for certain kinds of research, reviewers who don’t know the review criteria well, the lack of compensation for reviewers, how people from different groups receive or interpret reviews [6], and more.

[6] If you are from a group historically excluded from computing or CSEd, you may take negative comments as a further sign that you don’t belong. Conversely, if you are part of a majority group, you can often ignore negative comments or at least let them slide off you. Many of those from majority groups, myself included, don’t seem to have thought deeply enough about how the tone or tenor of negative reviews can further exclude people.

[7] I believe that the SIGCSE leadership plans to address broader issues. I don’t know how long it will take. I expect that issues of bias and the experience of those from historically excluded groups are at the front of the queue.

[8] That description is inadequate. I hope it still gets the point across.

[9] Or purple or green, which I used when I switched to friendlier colors.

[10] However, exceptional presentation improves even mediocre content.

[11] By we, I mean members of a Computer Science department.

[12] Yay! More volunteers are needed.

[14] Hmmm … I don’t see an explicit session for the discussion of reviewing on the agenda. That puzzles me. There are, however, some workshops by great people on peer reviewing. I hope that I can find a way to attend one of those.


Version 1.0 released 2023-03-01.

Version 1.0.1 released 2023-03-02.