Writing in 2008, Campbell (albeit somewhat uncertainly) saw a possible solution to the impact factor conundrum in the rise of mega-journals like PLoS ONE, which publish exclusively online and judge papers only on their technical competence rather than their perceived novelty, and in the potential of article-level metrics to assess the scientific worth of papers and their authors. In the end, however, he couldn’t shake the editorial habit of selection, writing of the contents of archives and mega-journals: “nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”
Four years later, such views are being buffeted by the rising tides of open access and social media. It might sound paradoxical, but nobody should have to wade through the entire literature, because everybody could be involved in the sifting.
The trick will be to crowd-source the task. I am not suggesting we abandon peer review; I retain my faith in the quality control provided by expert assessment of manuscripts before publication, but this should simply be a technical check on the work, not an arbiter of its value. The long tails of barely referenced papers in the citation distributions of all journals, even those of high rank, are evidence enough that pre-publication peer review is an unreliable determinant of ultimate worth.
Instead, we need to find ways to attach to each piece of work the value that the scientific community places on it through use and citation. The rate of accrual of citations remains rather sluggish, even in today’s wired world, so attempts are being made to capture the internet buzz that greets each new publication; there are interesting innovations in this regard from the likes of PLOS, Mendeley and altmetrics.org.