One purpose of this communication is to record my forewarning concerning the possible promiscuous and careless use of quantitative citation data for sociological evaluations, including personnel and fellowship selection. In particular, I wish to disassociate myself from such abuse of citation data recently imputed to me by Swanson. He erroneously stated that in my 1955 paper in Science [1] I claimed one could measure the importance of a paper by citation counting. Citation counting is an old technique and has been criticized for many reasons by Brodman, Raisig, and others. Impact is not the same as importance or significance. There is no specific correlation between the number of papers published by an individual and the quality or importance of his work, though Price has indicated that scientists who produce work of high quality usually have a high publication rate. We can confirm this and add the observation that their papers are usually also cited more frequently than the average.
[...]
Citation indexes can be used to facilitate personnel and fellowship evaluation simply because they provide more convenient access to the literature. Citation indexes synthesize the consensus of scientific opinion needed in a careful appraisal of research, whether for editorial refereeing, making awards, or selecting personnel. It is preposterous to conclude blindly that the most cited author deserves a Nobel prize. On this basis, Lysenko and others might have been judged the greatest scientists of the last decade. Such quantitative data can have significance for the historian who can carefully evaluate all the data available. Surely, the history of science must record the controversial as well as the non-controversial figure. However, mere ranking by number of citations or number of papers published is no way to arrive at objective criteria of importance.