More about the total mess that is the standard academic publication system

A while ago I wrote a post about the broken-ness of the current standard publication model for academic research – write a manuscript, submit to a journal, have the journal send your manuscript to about three somebodies who by someone’s standard have some sort of “expertise” in something related to some part of what you wrote, have those somebodies give a thumbs up or down along with minimal comments that may or may not be supported by evidence or even citations, repeat until you publish or give up. Paul reminded me of that post when he sent me this Note from the Editor of the American Political Science Review.

The note’s behind a stupid pay wall, so let me quote the relevant parts:

…we have two main observations. First, although the discipline as a whole is less fragmented than we had feared…some subfields—and you know who you are—continue to be riven by ideological or methodological conflict. Too often, a paper in one of those fields draws recommendations of “reject,” “minor revision,” and “accept” from three equally esteemed referees. When reviewers diverge so greatly, the editorial team’s judgmental burden increases significantly, compelling editors to discount some expert advice (if possible, without antagonizing reviewers) in order to provide coherent advice to authors. We have no answer to this puzzle, but simply note a pattern accurately described by one co-editor: increasing engagement across sub-disciplines, sustained fratricide within some.

Second, we have become painfully aware of how badly (or how little) some of our colleagues read. Articles are too often cited, by authors and by referees, as making the exact opposite of the argument they actually advanced. Long books are noted, with a wave of the rhetorical hand but without the mundane encumbrance of specific page or even chapter references; and highly relevant literatures, even in leading political science journals, are frequently ignored. We may have fallen victim to an occupational disease of editors, but we have often found ourselves moaning, “Doesn’t anybody read anymore?” It is cold comfort that this sloppiness extends well beyond political science. A recent study has shown that, even in “gold standard” medical research, articles that clearly refute earlier findings are frequently ignored, or even cited subsequently as supporting the conclusion they demolished.

I quote the first paragraph just to underline some of the stuff I wrote about in my original post. In a discipline where three people who are supposedly equally competent to review a piece hand down wildly divergent reviews – and that is, I think, a commonplace occurrence in nearly all the social science disciplines – the solution is to get a whole lot more than three reviewers. I find it especially interesting that editors at a journal like the APSR, which is known for its emphasis on statistical findings, feel it is appropriate to base evaluations of contributions on a sample size of 3.

But it was the second paragraph that Paul pointed out to me as the more interesting one, and I agree. I remember discussing a body of research with some colleagues in grad school. At one point in the conversation, one colleague exasperatedly declared: “I’ve been going through these books and articles and looking up all the references they use to support their arguments, and none of the stuff they cited says what they say it says!” (This colleague was more meticulous about her research than I or anyone else in our cohort was or ever will be, so it didn’t surprise any of us that she had gone back and looked up every citation).

I’m embarrassed to admit how many times I let myself write a manuscript or report that at some point said something like “Research has shown that [argument x] and [argument y] ([citation 1, citation 2, citation 3 …, citation n]).” It’s a bad practice but it’s common. I try very hard to avoid that sort of thing now.

But I found it particularly interesting that the APSR editors felt that claiming a study supports an assertion when in fact it refutes it is a symptom of people not reading the study. I don’t doubt that that actually happens, but it seems to me that it’s at least as plausible that people honestly see support for their arguments in the stuff they read. It’s really, really easy to interpret the exact same set of findings many different ways, even when those findings are presented clearly. When the findings are bogged down in disciplinary jargon or hidden behind statistical-significance stars, that misinterpretation is practically guaranteed.

That’s why a small set of reviewers doesn’t ensure that quality analyses get published. It doesn’t matter if they’re “experts in their field,” especially in disciplines where fields are so idiosyncratically defined as to ensure only sporadic overlap across researchers. They’re not experts in a body of literature that is so broad and so diverse in its style and scope that no individual or small subset of individuals can hope to come across even a substantial minority of it within their professional lifetime. And they’re not experts in judging whether they’ve honestly assessed an argument with which they disagree.

The best way to keep someone from saying a piece of research says something it doesn’t really say is to have them say it in public. Then someone else who has read the same research can disagree with them. I’d actually feel a lot better about the whole three-reviewers publication model – even with anonymous reviewers, which I think is completely unnecessary – if the editors made the reviewers’ comments public and allowed a comment period before making a decision about the paper, and if they appended a complete, open, and free (no pay wall) link to the original reviews and comments to the print and electronic versions of everything they eventually decided to publish.

That’s the essence of what I was getting at in the last post I wrote on this subject. No editor or reviewer or any individual knows enough of the literature, has a good enough grasp of the current state of all a discipline’s subdivisions, understands enough of all the available methods, and can interpret enough of all the available theory and jargon to be able to decide whether a piece of research is “quality” research. No editor or reviewer or any individual is competent to decide whether a piece of research is valuable or interesting. A more reasonable publication model would accept what is already true – that each reader decides those things for him or herself no matter what an editor or reviewer says – and give people the pre-publication comments to serve as tools in deciding whether an argument deserves to be believed.


6 thoughts on “More about the total mess that is the standard academic publication system”

  1. I agree with your points about citation practices, both in the format (“Research has shown that [argument x] and [argument y] ([citation 1, citation 2, citation 3 …, citation n]).”) and in the explanation (people make connections to their own ideas, not just sloppy reading). Is there an “inspired by” Latin abbreviation, or is “cf.” the closest we can get? Anyway, I comment because I am writing grants and feel pressured (by a sense of what is necessary rather than anyone’s explicit advice) to reproduce the “Research has shown that [argument x] and [argument y] ([citation 1, citation 2, citation 3 …, citation n])” format. Do you have any thoughts on that? Is it okay in the grant context though not in publications? Do you think that the journals themselves expect/encourage that format?

    Also, I feel you on the arbitrariness of review panels, the solution of adding more reviewers, and the idea of publishing feedback. I don’t always read the responses in publications that include them, but I like having them available (and the click-through-if-you’re-interested option the internet makes possible seems particularly well suited here).

  2. I’d say as a general rule there are very few situations (I can’t think of any off the top of my head) where it’s ok to say that “research says” anything. Maybe if all you’re trying to argue is that other people have said something – if you’re just trying to substantiate the pure fact of words having been written by multiple people – then maybe…but even in that case, you’d be better off quoting them to substantiate your assertion.

    The thing is, “research shows” doesn’t mean anything. If research really has shown something, then we ought to be able to state what data the researchers collected, how they analyzed it, what they found, and why that shows what we say it shows. In most cases, if that kind of 1-2 sentence summary can’t establish the fact that the research supports our assertions, then that means the research probably isn’t worth citing.

    That being said, the last two proposals I wrote for an NSF grant were full of “research shows” sentences. I’m not proud of that. Then again, I’m not proud that in the first of those grants I took out a paragraph of methodology in order to pay lip service to a couple of specific scholars because one of the reviewers said he/she wouldn’t approve the proposal if I didn’t mention those people. The silliness of the grant system is something that will have to wait for a future post.

    Regarding review panels: I don’t think the solution is just to have the comments out there, although even just that would be a step in the right direction. As I argued earlier (https://houseofstones.wordpress.com/2012/06/13/we-dont-need-better-research-we-need-more-research-with-search-options/), if those comments were tagged with meta-data and people were allowed to search comments using that data, then people wouldn’t need to slog through all the trash comments to get to the ones that were genuinely helpful and important.

  3. Pingback: Social scientists sometimes have kind of a weird view of their own relevance | House of Stones

  4. Pingback: ☆ The Story of an Article « Mostly physics

  5. I agree with your critiques of the journal peer review process. Anonymity in the review process is indeed pointless – we end up getting so specialized in our work that we already know each other’s research programs anyway, and if we don’t initially recognize the research, we can usually figure it out from citation patterns. As well, other problems arise with small panels of judges, where arbitrary oversights get inflated. To that end, I’ve had the experience of receiving an email from a “notable person in the field” with whom I’d never interacted before, but who coincidentally was the editor of a journal I had recently submitted an article to, commenting that he’d recently “discovered” my work and found it interesting, but that I didn’t cite them enough (though I did cite several of their papers, hence the emphasis on “enough”). After what I thought was a collegial exchange, though, my paper was rejected long afterward (6 months later), and some of the comments were vile. Gee, I wonder who wrote them? I have a pretty good idea, given that “coincidental” exchange. What’s gained by a system that allows this to happen? My work was not fairly evaluated, and I was insulted too; did this person really think I wouldn’t put two and two together? And if they knew I would, what could they possibly have thought would be gained by the process? Did they think they were teaching me some sort of lesson? I’d love to see some changes to the system that ensure both quality evaluations and fair evaluations. We really should create a system that reflects the same kind of caution we use in our research to protect ourselves from bias.

  6. Erica,

    Yuck. I’ve had some bad interactions with reviewers, but never anything that bad. Unfortunately, I think anonymity breeds bad behavior. There’s no reason not to act shamefully if no one is likely to find out it was you who did the acting. Also, public reviews can be publicly responded to, not only by the original author but also by other interested parties. I think people in general tend to be more careful about the quality of what they write when they know their reviews will also be reviewed.
