In a report dated June 12, 2008, the International Mathematical Union (IMU), in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS), expresses concern about overreliance on journal impact factors in decisions about library subscriptions and granting of tenure.
From the report's Executive Summary:
Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused.
• For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health.
• For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.
• For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naïve attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
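To make the report's point concrete, the two single-number statistics it criticizes can be sketched in a few lines of Python. This is an illustrative example, not from the report; the function names and sample citation counts are invented for the demonstration.

```python
def impact_factor(citations_to_recent_items, citable_items):
    """Two-year impact factor: citations received in a given year to items
    published in the previous two years, divided by the number of citable
    items published in those two years. A simple average."""
    return citations_to_recent_items / citable_items

def h_index(citation_counts):
    """h-index: the largest h such that the author has at least h papers
    cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Both statistics collapse an entire citation distribution to one number,
# so very different records can look identical or misleadingly similar.
print(impact_factor(200, 100))    # 2.0
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([30, 1, 1, 1, 1]))  # 1 -- one hit paper barely registers
```

The last two calls illustrate the information loss the report describes: a record with one highly cited paper and a record of uniformly solid papers get very different h-indices even when their total citation counts are comparable.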
The Wall Street Journal, in a June 16 article covering the report, gives an example from the authors' field:
Mathematicians are particularly vulnerable to quirks in the impact factor, Mr. Ewing said, noting that his colleagues are more likely to cite older work (while impact factors use citations within two years of publication) and that in general they cite much less than other scientists. "Some dean somewhere in a small university might conclude biologists are six times as smart as mathematicians," Mr. Ewing said, adding that it makes about as much sense as ranking people's popularity by the number of people whose hands they've shaken.

Posted by stemp003 at June 20, 2008 4:06 PM