Avalanche of Useless Science

In a post last winter, I discussed whether papers that receive few or no citations are worthwhile anyway. I came up with a few reasons why they might be, and noted that the correlation between a paper's citation count and its "importance" may not be very strong.

In an essay in The Chronicle of Higher Education this week, several researchers argue that "We Must Stop the Avalanche of Low-Quality Research". In this case, "research" means specifically "scientific research".

How do they assess what is low-quality ("redundant, inconsequential, and outright poor") research? They use the number of citations.

Uncited papers are a problem because "the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed."

There you go! A great reason to turn down a review request from an editor:

Dear Editor,

I am sorry, but I am going to have to decline your request to review this manuscript, which I happen to know in advance will never be cited, ever.


Sincerely,


CitedSciProf

What if a paper is read but just doesn't happen to be cited? Is that OK? No, it would seem not:

"Even if read, many articles that are not cited by anyone would seem to contain little useful information."

Ah, it would seem so, but what if the research behind that uncited paper involved a graduate student or postdoc who learned things (e.g., facts, concepts, techniques, writing skills) that were valuable to them in predictable or unexpected ways? Is it OK then, or is that not considered possible because uncited papers are, by definition, useless? This is not discussed, perhaps because it is impossible to quantify.

The essay authors take a swipe at professors who pass along reviewing responsibilities: "We all know busy professors who ask Ph.D. students to do their reviewing for them."

Actually, I all don't know them. I am sure it happens, but is it necessarily a problem? I know some professors who involve students in reviewing as part of mentoring, but in those cases the professor is closely involved in the review; the student does not do the professor's "reviewing for them". In fact, I've invited students to participate in reviews, not to pass off my responsibility, but to show them what is involved in doing a review and to get their insights on topics close to their own research. It is easy to indicate in comments to the editor that Doctoral Candidate X was involved in a review.

Even so, the authors of the essay blame these professors, and by extension the Ph.D. students who do the reviews, for some of the low-quality research that gets published. Because the graduate students are not expert reviewers, "Questionable work finds its way more easily through the review process and enters into the domain of knowledge." In fact, in many cases graduate students, although inexperienced at reviewing, will likely do a very thorough job. I don't think grad student reviewers contribute to the avalanche of low-quality published research.

So I thought the first part of the essay was a bit short-sighted and overdramatic ("The impact strikes at the heart of academe"), but what about the authors' practical suggestions for improving the overall culture of academe? These "fixes" include:

1. "..limit the number of papers to the best three, four, or five that a job or promotion candidate can submit. That would encourage more comprehensive and focused publishing."

I like the kernel of the idea -- that candidates who have published 3-5 excellent papers should not be at a disadvantage relative to those who have published buckets of less significant papers -- but I'm not exactly sure how that would work in real life. What do they mean by "submit"? The CV lists all of a candidate's publications, and the hiring or promotion committees with which I am familiar pick a few of these to read in depth. The application may or may not contain some or all of the candidate's reprints, but it's easy enough to get access to whatever papers we want to read.

I agree that the push to publish a lot is a very real and stressful phenomenon, and I appreciate the need to discuss solutions. Even so, in the searches with which I have been involved, candidates with a few great papers had a distinct advantage over those with many papers that were deemed least publishable units (LPUs).

I think the problem of publication quantity vs. quality might be more severe for tenure and promotion than for hiring, but even there I have seen that candidates with fewer total papers but more excellent ones are not at a disadvantage relative to those with 47 LPUs.

2. "..make more use of citation and journal "impact factors," from Thomson ISI. The scores measure the citation visibility of established journals and of researchers who publish in them. By that index, Nature and Science score about 30. Most major disciplinary journals, though, score 1 to 2, the vast majority score below 1, and some are hardly visible at all. If we add those scores to a researcher's publication record, the publications on a CV might look considerably different than a mere list does."

Oh no... not that again. The only Science worth doing will be published in Science? That places a lot of faith in the editors and reviewers of those two journals and constrains the type of research that gets published.

I have absolutely no problem publishing in a disciplinary journal with an impact factor of 2-4. These are excellent journals, read by all active researchers in my field. It is bizarre to compare them unfavorably with Nature and Science, as if papers in a journal with an impact factor of 3 are hardly worth reading, much less writing.
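
As an aside, for anyone unfamiliar with the metric: a journal's standard two-year impact factor is simple arithmetic -- the citations received in a given year by items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch in Python, with made-up numbers purely for illustration:

    # Two-year journal impact factor for year Y:
    #   citations received in Y by items published in Y-1 and Y-2,
    #   divided by the number of citable items published in Y-1 and Y-2.
    def impact_factor(citations_in_year, citable_items):
        return citations_in_year / citable_items

    # Hypothetical disciplinary journal: 240 citations in 2010 to the
    # 80 articles it published in 2008-2009 gives an impact factor of 3.0.
    print(impact_factor(240, 80))  # 3.0

By that arithmetic, the gap between a 3 and a 30 reflects field size and citation habits at least as much as it reflects quality.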

3. ".. change the length of papers published in print: Limit manuscripts to five to six journal-length pages, as Nature and Science do, and put a longer version up on a journal's Web site."

I'm fine with that. It wouldn't have any major practical effect on people like me who do all their journal reading online anyway, but for the individuals and institutions that still pay for print journals, it could help with costs, library resources, etc.


Let's assume that these "fixes" really do "fix" some of the problems in academe -- e.g., the pressure to publish early and often. What then?

"..our suggested changes would allow academe to revert to its proper focus on quality research and rededicate itself to the sober pursuit of knowledge."

Maybe that's my problem: I enjoy my research too much and have forgotten what an entirely sober pursuit it should be. I guess the essay authors and I are just not on the same page.