A few months ago, I got into a disagreement with an acquaintance over whether the Internet was going to destroy science. His disquiet was sparked by open access, the longstanding movement toward making scientific research freely available to all. Unlike earlier Internet intellectual property disputes, it’s the institutions that are leading this fight. In February, the University of California, which encompasses numerous significant research institutions, announced it would drop its $11 million subscription to Elsevier’s journals. Research consortia in Norway, Germany, and Hungary have made similar announcements. Cost is definitely one issue: university libraries’ spending on journal subscriptions has quintupled since 1986, according to the Association of Research Libraries.
The bigger issue is the business model. Most research is publicly funded, yet the journals pay for neither the research nor the writing and peer review, and in some cases not even the editing, though they do carry other overheads. Nonetheless, anyone who wants to read this publicly funded work must pay to do so. Increasingly, this is resented as double-dipping: publicly funded research should be readily available to the people who’ve paid for it. The crucial sticking point in these contract negotiations, therefore, is requiring open access, not least because research funders such as the US National Institutes of Health are writing it into their grants.
In open access, researchers pay up-front publishing fees, but thereafter the work is free for all and sundry to access, either because the publishing journal is itself open access (“gold”) or because the paper is deposited in an open repository once it’s been accepted (“green”). Contrary to my acquaintance’s complaint, however, there appears to be no reason the open access model wouldn’t support the same quality of peer review as commercial academic publishing. Greater access ought, if anything, to improve it.
There are many other problems that commercial academic publishing has failed to solve. The emphasis on publishing novel results has led to a well-documented replication crisis. Replication is meant to be the bedrock of science: we trust results because others can rerun the same experiment and get the same outcome. Yet because journals prize novelty, replication studies struggle to find publication at all. Worse still is the very high number of papers, particularly in medicine, that are later retracted, as the Retraction Watch website has shown. If open access improves the speed and transparency of publishing, it should help ongoing efforts to remedy these problems.
The big sticking point so far is reputation: every scientist wants their work to appear in the most prestigious journals, the ones with long histories of being trusted. In this sense, what Elsevier or Springer provides is imprimatur, not process. Yet my acquaintance’s original concern that academic publishing will be, like Amazon’s ebooks list, overrun by fakes is not entirely misplaced. It has long been true that anyone publishing in academic journals soon finds their email inbox filled with invitations to speak at fake conferences and submit their work to fake journals. I published one article in 2013 on the history of data retention in the UK in IEEE Security & Privacy and for two years afterwards was invited to speak at (mostly Chinese) conferences on topics like cell biology and physics. My academic friends tell me they get a steady stream of this stuff. This trend is fuelled by incentives such as promotion and tenure, not by the business model of academic publishing; so is the culture of putting out “least publishable unit” papers.
In the last week my Twitter feed has brought me two newer, more alarming trends. The first is the increasing use of automated software to check submissions for plagiarism. It’s understandable why the journals might do this; plagiarism is a problem in academia at all levels (though the more interesting variant at the top level is self-plagiarism, in which scientists repeatedly copy from their own work). The discussion on Twitter, however, surfaced myriad instances of flawed technology worth mocking: the checking ‘bots flag frequently used citations, standard explanations of standard protocols, and even the authors’ affiliations. Since papers have to pass the ‘bot before reaching the human editors, the result is automated rejections and frustration for all concerned.
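To see why these false positives are almost inevitable, here is a minimal sketch — emphatically not any journal’s actual system, and all names and text in it are invented for illustration — of how naive similarity checkers work: they score word n-gram overlap against previously published text, and standard phrases like affiliations and protocol descriptions overlap heavily even when the science is original.

```python
# Minimal sketch of naive n-gram plagiarism scoring (illustrative only).
# Boilerplate text overlaps with prior publications, so original papers
# containing it can score as "plagiarised".

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, prior_text, n=3):
    """Fraction of the submission's n-grams that also appear in prior text."""
    sub = ngrams(submission, n)
    prior = ngrams(prior_text, n)
    return len(sub & prior) / len(sub) if sub else 0.0

# Hypothetical earlier paper containing the same standard wording.
prior = ("Department of Biology, Example University. "
         "Cells were cultured at 37 degrees in standard growth medium.")

# A new paper reusing only its affiliation and a standard protocol line.
submission = ("Department of Biology, Example University. "
              "Cells were cultured at 37 degrees in standard growth medium. "
              "We report an entirely novel finding about gene expression.")

print(f"overlap: {overlap_score(submission, prior):.0%}")
```

In this toy example, well over half the submission’s trigrams match the “prior” text, even though its actual finding is new — which is roughly the failure mode the Twitter thread documents.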
Even more concerning is a study a friend posted on Twitter, which finds that exposure on Twitter leads to higher citation rates among papers in the hot social media field of … coloproctology. The paper concludes that medical communities should encourage all concerned to publicise their work on social media. My conclusion is that we should get these folks back into reading journals. Scientific merit can’t be allowed to become a popularity contest.
Scientific publishing is going to have to change; scientists themselves need faster and better access to each other’s work. And there are doubtless areas where automation can help — for example, in updating meta-analyses and trials as new data comes in. Peer review and replication remain the bedrock of science; we need new and better ways to implement them.
Twitter thread on automated submission rejection: twitter.com/jfbonnefon/status/1140946785474633729
“The impact of social media on citation rates in coloproctology”: onlinelibrary.wiley.com/doi/10.1111/codi.14719#.XQtQzVwX2N0.twitter