Exploring new ideas, messing around with disciplinary boundaries, making unusual and innovative connections – surely that’s what cutting edge research is supposed to be about these days? Certainly it’s something many researchers aspire to – at least on those grant proposals where “interdisciplinary”, “multidisciplinary” and even “transdisciplinary” are essential buzz-words. But let me tell you, it can make a journal editor’s life a misery.
Take this example: In my role as coordinating editor of the Journal of Nanoparticle Research I get more than my fair share of papers with titles something like this dropping into my in-box: “A novel, green approach to synthesizing really fancy nanoparticles using extract of lesser spotted purple wort”. It’s a fictitious title, but not that far removed from some of the papers I receive.
It sounds really interesting in principle – combining as it does aspects of chemistry, materials science, botany, and probably half a dozen other disciplines, to arrive at a biological route to manufacturing nanoparticles that have probably previously only been synthesized through some messy route using buckets of nasty chemicals.
But here’s the snag when you get to peer review – you need peers to review the research. And inevitably there are only a handful of people in the whole world who have done similar research – and they are usually all co-authors on the paper you are trying to get reviewed!
So as a journal editor you have a problem – who on earth do you get to review the work before publication?
I can’t be alone in struggling with this. Papers published within clear-cut disciplines usually have a sizable pool of peers to call on for the review and publication process to work. But the more that disciplines are mixed and matched in research, the smaller that pool of qualified experts becomes. Until you hit the ultimate in transdisciplinary research, and find yourself with a pool of one.
In the long run, ways will have to be found around the peer review challenges that interdisciplinary research presents. For top tier papers it probably isn’t too much of an issue – here there is a pool of interdisciplinary experts willing to give up their time to review truly groundbreaking research. But for the vast majority of publications, I suspect it’s becoming an increasing problem.
There have been some ideas bandied around – the use of social media and on-line paper ranking systems for instance (PLoS One for example allows papers to be rated and commented on). And Christopher Lee at UCLA has posted a couple of pieces on-line on new models of interdisciplinary peer review.
But there’s not a lot out there as far as I can see.
In the meantime, we struggle on. In the case of the papers on novel nanoparticle synthesis routes (on which I am well on the way to becoming an expert!), I keep asking away until I get enough reviews of sufficient quality to make a decision on submissions. But it takes a long time for peer reviews like this to be completed – especially when you wait a month, just to get back a review along the lines of “this is a very nice paper” (I kid you not – this is not an uncommon – albeit totally unacceptable – review). It also takes its toll on the editor, who ends up spending hours scanning the literature for possible reviewers. The result is extremely long review times, and an increased chance of either dodgy work being published or innovative research being rejected.
I must confess this is more of a gripe blog than a “here’s a solution” blog. But I am interested to know how many others out there are struggling with this, or have come up with tentative solutions to the “peer review on a pool of one challenge”.
Andrew, it seems to me the easiest way around this sort of situation is through it. Taking your fictional example paper title above, one would find a materials scientist to examine the nanoparticle synthesis parts of the paper and then a botanist to review the botanical aspects. A paper like that is likely to come from a group of investigators, each with expertise in one primary discipline or another, not from researchers with hybrid specializations. This sort of paper could only have been written by a materials scientist and a botanist, with maybe a chemist thrown in for good measure, and not by a “botananobiotechnologist” or some such, because such a thing does not exist, nor should it. This is the very reason why the founders of the institute I work for have avoided conferring degrees in nanotechnology or nanobiotechnology, and instead want PhDs trained in ONE primary discipline such as chemical engineering or physics or biology or whatever, AND then supplementary training in nanoscience at a very high level. They want researchers who can work at the interface of disciplines yet be considered fully trained experts who specialized in one major area. In this way, their work will not get bogged down in peer review. It can easily be judged on its merits in MANY DIFFERENT FIELDS because those disciplines are fully expressed and explored in the work. Respectfully, your friend in Baltimore, MAS.
Thanks Mary Anne.
I’m with you on the need to build from foundational areas in science rather than get caught up in the latest hybrid fad. However, the fact that research pushing the boundaries of knowledge is increasingly working at the intersections of foundational disciplines does raise review challenges that are not easily parsed out – in part because the whole (if it is done well) is more than the sum of the parts here, and without a full perspective on the cross-disciplinary research, something of its value and significance is missed.
In the case of peer review, a major hurdle is the aim of publishing research that makes a significant contribution to the state of the science. A paper may be scientifically sound and yet make a negligible contribution to the state of the science, and this is something that is extremely hard to assess if reviewers don’t have a good handle on the field of publication.
In terms of using reviewers from different areas of expertise, it’s an attractive idea, and one that I have tried – but with little success. The problem is that potential reviewers under these circumstances will reject the request because they consider the paper to be outside their area, approach the paper from the peculiarities of their own discipline in terms of conventions, expectations etc., or provide trivial comments because they struggle to address in isolation the parts of the work they are familiar with. And at the end of the day, none of the reviewers are able to assess whether the work makes a significant contribution to the state of the science – because they are not qualified to make this call.
It’s a tough one!
What about asking the folks who submit the papers to also provide a list of suitable peers to review it, while making it clear that best friends/spouses/lovers/folks who are likely to say only nice things about you can’t be on that list? It really shouldn’t be too onerous – people doing research know who else is doing research in their own or similar fields. It would be a start at least, even if those suggested weren’t the ones who ultimately reviewed the paper.
That’s already common practice for most journals. But, as you can imagine, authors will still put forward their friends/spouses/lovers in spite of a clear NO in the policy.
Unfortunately I don’t have any brilliant solutions to the peer review dilemma, but I agree that it is a problem. A couple of months ago I attended a talk by Bruce Alberts (Biochemistry Prof at UCSF, Editor-in-Chief of Science Magazine, U.S. Science Envoy) where he brought up some shortcomings of the peer review process. He lamented how the current system rewards conservative research because reviewers (especially with multidisciplinary papers) aren’t familiar enough with the topic to favorably review outside-the-box papers.
Looking at the opposite side from the editors, there is a danger for researchers who are publishing papers where there is only a potential review pool of one. I’ve seen it where labs will be very prolific in publishing papers on a certain topic, so prolific in fact that nobody else is doing research along those same lines. Then they get into a situation where they are only citing their own papers, and nobody else is citing them. The lack of outside citations kills their Thomson ISI score.
As a way of solving the “peer review on a pool of one challenge”, my vote is for the PLoS One strategy of allowing people to rate and comment on papers. I think some sort of ranking system for the commenters would be needed though, so that high marks from a highly rated commenter would count for more than high marks from an unranked commenter. Yet I fear this would eventually run into the same ‘lack of qualified reviewers’ problem as the current peer review process.
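(Just to make that reputation-weighted ranking idea a little more concrete, here is a minimal sketch in Python. It is purely illustrative – the names, scales and weights are all made up, and this is not how PLoS One or any real commenting system actually scores papers.)

```python
# Hypothetical sketch of a reputation-weighted paper rating.
# All names and scales here are illustrative assumptions, not any real system's API.

from dataclasses import dataclass

@dataclass
class Rating:
    score: float           # e.g. a 1-5 star rating given to the paper
    commenter_rank: float   # 0-1 reputation of the commenter, however that is earned

def weighted_paper_score(ratings: list[Rating]) -> float:
    """Average the ratings, weighting each by the commenter's reputation,
    so high marks from a highly rated commenter count for more."""
    if not ratings:
        return 0.0
    total_weight = sum(r.commenter_rank for r in ratings)
    if total_weight == 0:
        # Fall back to a plain average if no commenter has any reputation yet
        return sum(r.score for r in ratings) / len(ratings)
    return sum(r.score * r.commenter_rank for r in ratings) / total_weight

# Example: one enthusiastic unranked commenter vs. one lukewarm, highly ranked one
print(weighted_paper_score([Rating(5.0, 0.1), Rating(3.0, 0.9)]))  # -> 3.2
```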
As @AtheneDonald has just pointed out on twitter, the problem also applies to reviewing of grants. I’ve sat on grants boards and been dismayed at how the interesting ones at the boundary of two disciplines almost always get shot down by reviewers whose expertise lies in just one of those disciplines. It’s a real disincentive to doing innovative studies.