In total we have 4 quotes from this source:

 These developments go hand in...

These developments go hand in hand with the rise of open access (OA) publishing. Though primarily motivated by the research and societal benefits that will accrue from freeing the dissemination of the research literature, open access is also needed to optimise crowd-sifting of the literature by making it accessible to everyone. But the growth of open access is also being held back by the leaden hand of the impact factor. This year has seen several significant policy developments in the US, EU and UK, but we still have a considerable way to go. In the long term open access can only work by moving to a gold ‘author pays’ model that has to be funded by monies released from subscription cancellations, but while we continue to place false value in impact factors, the publishers of high ranking journals can claim that the cost of sifting and rejecting scores of manuscripts must be borne by the system and therefore warrants exorbitant charges for gold OA.

#open-access  #access  #publishing  #open-access-publishing 
 If you use impact factors you are statistically illiterate

I don’t wish to underestimate the difficulties. I am well aware of the risks involved, particularly to young researchers trying to forge a career in a culture that is so inured to the impact factor. It will take a determined and concerted effort from those in positions of influence, not least senior researchers, funders and university administrators. It won’t be easy and it won’t be quick. Two decades of criticism have done little to break the addiction to a measure of worth that is statistically worthless.

But every little helps, so, taking my cue from society’s assault on another disease-laden dependency, it is time to stigmatise impact factors the way that cigarettes have been. It is time to start a smear campaign so that nobody will look at them without thinking of their ill effects, so that nobody will mention them uncritically without feeling a prick of shame.

So consider all that we know of impact factors and think on this: if you use impact factors you are statistically illiterate.

  • If you include journal impact factors in the list of publications in your CV, you are statistically illiterate.
  • If you are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors, you are statistically illiterate.
  • If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)

If you see someone else using impact factors and make no attempt at correction, you connive at statistical illiteracy.

#factors  #cigarettes  #assault  #publications  #risk 
 The way impact factor works

The impact factor might have started out as a good idea, but its time has come and gone. Conceived by Eugene Garfield in the 1970s as a useful tool for research libraries to judge the relative merits of journals when allocating their subscription budgets, the impact factor is calculated annually as the mean number of citations to articles published in any given journal in the two preceding years.
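
As a sketch of that arithmetic (the years below are purely illustrative), the impact factor for a given year is the ratio:

$$
\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by items the journal published in 2010 and 2011}}{\text{number of citable items the journal published in 2010 and 2011}}
$$

So, for example, a journal whose 2010–2011 output of 200 citable items drew 800 citations in 2012 would report an impact factor of 4.0; as the next paragraph explains, that single figure says very little about how often any individual paper in the journal is actually cited.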

By the early 1990s it was clear that the use of the arithmetic mean in this calculation is problematic because the pattern of citation distribution is so skewed. Analysis by Per Seglen in 1992 showed that typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers.
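
To see why taking a mean over such a skewed distribution misleads, here is a minimal sketch with entirely invented citation counts (not Seglen’s data): a few heavily cited papers pull the mean well above what the typical paper in the journal achieves.

```python
# Minimal illustration (made-up numbers) of how a skewed citation distribution
# lets a handful of highly cited papers drag the mean (and hence the impact
# factor) above what most papers in the journal actually achieve.
from statistics import mean, median

# Hypothetical citation counts for 20 papers: a few hits and a long tail.
citations = [120, 85, 40] + [6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

avg = mean(citations)    # the impact-factor-style average
mid = median(citations)  # what a typical paper actually gets
top_share = sum(sorted(citations, reverse=True)[:3]) / sum(citations)
below_avg = sum(c < avg for c in citations) / len(citations)

print(f"mean (impact-factor-like): {avg:.1f}")
print(f"median paper:              {mid:.1f}")
print(f"top 3 papers' share of all citations: {top_share:.0%}")
print(f"papers cited less than the mean:      {below_avg:.0%}")
```

With these invented numbers the three most-cited papers collect the bulk of the citations, and the great majority of papers sit below the journal-wide mean, which is exactly the pattern Seglen documented.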

But the real problem started when impact factors began to be applied to papers and to people, a development that Garfield never anticipated. I can’t trace the precise origin of the growth but it has become a cancer that can no longer be ignored. The malady seems to particularly afflict researchers in science, technology and medicine who, astonishingly for a group that prizes its intelligence, have acquired a dependency on a valuation system that is grounded in falsity. We spend our lives fretting about how high an impact factor we can attach to our published research because it has become such an important determinant in the award of the grants and promotions needed to advance a career. We submit to time-wasting and demoralising rounds of manuscript rejection, retarding the progress of science in the chase for a false measure of prestige.

#factors  #library  #number 
 Peer-review vs crowdsourcing in scientific publishing

Writing in 2008, Campbell (albeit somewhat uncertainly) saw a possible solution to the impact factor conundrum in the rise of mega-journals like PLoS ONE, which publish exclusively online and judge papers only on their novelty and technical competence, and in the potential of article-level metrics to assess the scientific worth of papers and their authors. In the end, however, he couldn’t shake the editorial habit of selection, writing of the contents of archives and mega-journals: “nobody wants to have to wade through a morass of papers of hugely mixed quality, so how will the more interesting papers […] get noticed as such?”

Four years later such views are being buffeted by the rising tides of open access and social media. It might sound paradoxical but nobody should have to wade through the entire literature because everybody could be involved in the sifting.

The trick will be to crowd-source the task. Now I am not suggesting we abandon peer-review; I retain my faith in the quality control provided by expert assessment of manuscripts before publication, but this should simply be a technical check on the work, not an arbiter of its value. The long tails of barely referenced papers in the citation distributions of all journals — even those of high rank — are evidence enough that pre-publication peer review is an unreliable determinant of ultimate worth.

Instead we need to find ways to attach to each piece of work the value that the scientific community places on it through use and citation. The rate of accrual of citations remains rather sluggish, even in today’s wired world, so attempts are being made to capture the internet buzz that greets each new publication; there are interesting innovations in this regard from the likes of PLOS, Mendeley and altmetrics.org.

#social-media  #publications