The past decade saw an exponential rise in the number of academic articles published annually. At the same time, librarians and academics began to question traditional measures of research quality, such as the journal impact factor and citation counts, for being unreliable and slow to accumulate. Although these measures have long helped librarians filter for quality content, serving as an indication of the value of journal titles, they show weakness when applied to today’s rapidly evolving scholarly publishing marketplace. Neither can be applied easily to nontraditional scholarly outputs such as working papers, technical reports, datasets, and conference presentations.
The increase in open access (OA) publications makes research easier to access than ever before. Mega-journals such as PLoS ONE and SAGE Open publish more articles in a day than some journals do in a year. The sheer volume of available scholarship is enough to make your head spin.
Given these challenges, how can we librarians help people access what they seek while at the same time making our own jobs easier as we comb through the ever-rising tide of available scholarship?
Enter altmetrics, a new approach to determining the quality and popularity of research more quickly than ever before. To understand altmetrics, start with the Altmetrics Manifesto (altmetrics.org/manifesto), which explains how value can be assessed by tallying online shares, saves, reviews, adaptations, and social media usage related to research outputs of all kinds—not only traditional publications but also gray literature, digital scholarship, research blogs, datasets, and other modes of scholarly communication. When paired with usage statistics (downloads and page views) and traditional measures of impact (journal impact factors and citation counts), altmetrics can be an excellent way to sift through search results for high-quality and popular work and zero in on what patrons seek.
This article will cover the advantages and disadvantages of both new and traditional research metrics, with the aim of helping you understand how to use them to filter out the noise and better find what you seek. I’ll start with the traditional measures and then move on to alternatives.
Journal Impact Factor
ISI (now Thomson Reuters) created the Journal Impact Factor (JIF) in the 1960s as a shorthand measure of quality to allow scholars to understand the value of content published in a journal relative to other journals in a particular field. It represents the average number of citations received in a given year by the articles a journal published during the previous 2 years.
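For concreteness, the standard two-year formulation can be written out as follows (using 2013 as the census year; the precise rules for which items count as “citable” are set by the index publisher):

```latex
% Standard two-year JIF calculation, with 2013 as the census year
\mathrm{JIF}_{2013} =
  \frac{\text{citations received in 2013 by items the journal published in 2011--2012}}
       {\text{number of citable items the journal published in 2011--2012}}
```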
For many years, the JIF was the best, most objective tool available to determine the prestige of a journal. It allowed librarians to understand who the most authoritative publishers were in fields where they might not have domain knowledge. Librarians could also use it to teach the concepts related to information evaluation during instruction sessions to undergraduates. Junior faculty used it to understand where to publish in order to advance their careers. The JIF was (and is) a powerful tool when used correctly.
However, since the 1980s many have questioned the supremacy of JIFs as the de facto measure of research quality on two fronts—gaming and granularity. Over the years, reports have surfaced of editorial boards requiring authors to cite articles previously published in their journal in order to inflate the total number of citations received, thereby increasing their JIFs. Other critics have pointed out that JIFs are only an approximation of quality and that true measures of an article’s quality should be determined by an article-level metric such as a citation count.
Citation Counts
Citation counts are the total number of citations an article receives, usually tracked by a service such as ISI’s Web of Science or Scopus. Generally speaking, the higher the number of citations, the greater the perception of quality for that article. Citations are, after all, the most important currency that scholars use to acknowledge their intellectual forebears. When filtering through search results on a database, it is useful to sort results by citation counts to understand which publications are the most highly regarded on a particular topic.
Such techniques should be used sparingly, however. Articles receive citations for a number of reasons, including vanity (self citations), politics (honorary citations for a well-respected scholar), and refutation (positing that the original author’s hypothesis is incorrect). A salient recent example might be the “arsenic life” article (“A Bacterium That Can Grow by Using Arsenic Instead of Phosphorus”) published in the June 2011 issue of Science, where nearly all citations received to date have been from scientists disputing the hypothesis of the original article [13].
Another drawback to citation counts is their speed of accumulation. Citations do not accrue as quickly as other measures of impact due to the medium in which they appear. Scholarly articles take, on average, a year to make it from submission to publication. Because citations measure mentions in others’ publications, it can take 2 years or more from submission for a paper to receive its first citations. Some argue that this is not fast enough given the speed of communication enabled by the internet. Such turnaround times were acceptable in the days of yore when print journals were the norm, but they no longer satisfy researchers accustomed to immediate gratification.
Citations also rarely apply to nontraditional forms of scholarly communication such as preprints, technical reports, conference presentations, posters, and datasets. Though these outputs can be cited, few have associated permanent identifiers such as DOIs that allow citations to be tracked. These citations are also generally not included in databases such as ISI Web of Knowledge or the ACM Digital Library.
Finally, using citation counts as a mechanism to filter content can be challenging for librarians without access to subscription databases. It is much easier to search the web for a journal’s JIF and use it as a proxy for an article’s quality than to hunt down individual citation counts as they might appear on a publisher’s website.
Luckily, studies show that instantaneous and freely available measures of quality such as usage statistics and altmetrics can serve as early indicators of likely future citation counts.
Usage Statistics
For research metrics, the term “usage statistics” usually refers to page views and full-text download counts of content hosted in institutional repositories or on publisher websites. However, the term also describes search queries, clicks, and requests for access to particular pieces of a larger whole of online content, as well as top referring URLs and time spent on particular webpages.
Studies show that page views by expert evaluators correlate with quality assessments [6] and that, generally speaking, downloads have a strong and consistent correlation with the size of the audience [2]. Still other researchers have found some degree of correlation between PDF downloads and citations [10].
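As a rough illustration of how such correlations are typically measured, the sketch below computes a Spearman rank correlation between download counts and later citation counts. The numbers are invented for demonstration only; real studies such as [10] work from much larger samples drawn from publishers, repositories, and citation databases.

```python
# Illustrative only: invented download and citation counts for ten articles.
from scipy.stats import spearmanr

downloads = [120, 340, 95, 560, 210, 80, 430, 150, 300, 50]
citations = [3, 10, 2, 18, 6, 1, 12, 4, 9, 0]

# Spearman's rho measures how well the rank ordering by downloads
# matches the rank ordering by citations.
rho, p_value = spearmanr(downloads, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```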
Usage statistics’ correlation to traditional impact indicators may not be as important as their ability to show the use of scholarship outside of the bounds of academia. Usage statistics, unlike JIFs and citations, can measure an article’s use not only by scholars but also by a lay audience. For example, referring URLs for articles published by the scientific journal PeerJ show that many readers make their way to the journal via popular news sites such as Slashdot and The Economist. These connections are what make usage statistics—as well as altmetrics—a valuable addition to the suite of impact metrics used to determine the most important (and interesting) research being published.
ALTMETRICS: NO LONGER A FAD
Though once called a fad, altmetrics are rapidly gaining traction as a supplemental measure of quality for scholarship. A number of studies have shown that scholars are increasingly using the social web to share and discover research. It follows that the ways in which they share, discover, and annotate others’ research should be studied to track research impact. A recent article by Priem, Piwowar, and Hemminger (2012) proposes that “citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone” [10]. As discussed previously, altmetrics can also show the impact of research outside of the academy. Because altmetrics track how scholarship is shared and discussed in real time, they can fill the gap between publication and citation.
What metrics make up altmetrics? Search for an authoritative list of altmetrics measures and you will come up empty-handed. In the rapidly changing online environment, websites and services can gain—and lose—popularity overnight, meaning that there will never be a canonical list of web metrics that comprise altmetrics. On the flip side, that means altmetrics are flexible and adaptable to the changing needs of scholars and the public alike; they can tell us a lot about the nature of the research we come upon in the course of our searches.
In Table 1, I have provided a nonexhaustive list of categories and examples of altmetrics measures, accompanied by a description of how the measures are generally used. Generally speaking, there are five types of altmetrics—shares, saves, reviews, adaptations, and social usage statistics. The web services used to illustrate the various types of altmetrics generally fall into the categories of social media, where research is linked to for the purposes of sharing, saving, or reviewing (Twitter, Facebook, Mendeley, Reddit, F1000), or content platforms, where research outputs are uploaded by their creators (Figshare, SlideShare, Dryad, GitHub).
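To make those categories concrete, here is a minimal sketch of how the five types of altmetrics and some of the services that commonly feed them might be represented. The groupings simply restate the examples discussed above and in Table 1; they are illustrative, not an exhaustive taxonomy.

```python
# A simple mapping of altmetrics types to example sources, restating the
# categories discussed above; this is not a canonical list.
ALTMETRIC_TYPES = {
    "shares": ["Twitter", "Facebook", "Reddit"],
    "saves": ["Mendeley", "CiteULike"],
    "reviews": ["F1000", "research blogs"],
    "adaptations": ["GitHub", "Dryad", "Figshare"],
    "social usage statistics": ["Slideshare views", "Figshare downloads"],
}

for metric_type, sources in ALTMETRIC_TYPES.items():
    print(f"{metric_type}: {', '.join(sources)}")
```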
RESEARCH QUALITY
What do altmetrics tell us about research quality? Generally speaking, a healthy accumulation of metrics around a research article can indicate that the work is considered “quality.” However, the viral nature of the web can lead to extremes in altmetrics counts, which has led some to draw a distinction between two types of research: “scholarly” and “sexy.”
While librarians are familiar with scholarly research, one might wonder what “sexy” research is. Here’s a good example: In October 2009, PLoS ONE had the dubious honor of publishing one of the most popular research articles in recent memory, a study on bat fellatio. To date, the piece has garnered more than 250,000 views and 9,000 shares, yet has been cited only six times. Sometimes, there is a clear delineation between “sexy” research that is popular with the public and “scholarly” research that is well-respected by other researchers but generally uninteresting to those outside of the academy. More often, the two types of research overlap. Well-respected “scholarly” research can capture the public’s imagination, resulting in popular success.
Sometimes, popularity can indicate future scholarly citations. Many studies to date have pointed out the correlation between various altmetrics measures and JIFs and citations, as shown in Table 2. Some altmetrics measures can tell us in minutes what it takes citations months or years to tell us—the popularity of research among other scholars.
One new entrant into the altmetrics arena is Plum Analytics (plumanalytics.com). Although not yet widely used—it’s a subscription-based service—it seems oriented toward research analytics for university administration (measuring faculty output) and is not integrated into existing search services or publisher websites.
IMPROVING TRADITIONAL SEARCH HABITS
How can altmetrics be used to improve traditional search habits? The first and most obvious benefit of altmetrics is the speed with which they accumulate. Armed with the knowledge that certain types of altmetrics measures correlate with citation counts, librarians who are helping people find recently published research will be able to confidently recommend certain articles over others, given their altmetrics counts.
Altmetrics also offer something that citation counts cannot: contextualized metrics. While rote counts of citations do little to help the end user understand whether an article is high-quality, altmetrics can offer context through the wonders of text mining. Though still in its infancy, contextualized altmetrics services that support search could become the Next Big Thing, as they could instantly weed out articles that are being referenced only to be disputed or criticized, rather than because of their quality.
For certain fields that tend to rely less on journal articles—communities of practice, in particular—altmetrics can help zero in on the quality of content, agnostic of format. It can be difficult to determine the value of scholarship presented in working papers or datasets due to a lack of traditional signifiers of quality. Altmetrics for scholarly content in unconventional formats can help end users better understand whether that research is worthwhile. Similarly, altmetrics can apply not only to scholarship but also to researchers, departments, universities, and even nations to help determine the top experts on any given subject.
LIMITATIONS OF ALTMETRICS
Altmetrics are not perfect by any stretch of the imagination. As a relatively new type of research metric, there are still some issues that the field will need to address in order for altmetrics to become more widely adopted.
First, altmetrics providers need to develop a way to differentiate between scholarly and sexy research. Contextualized altmetrics services are quite new and have not yet been refined. No standards exist for reporting altmetrics; one imagines that a standard will need to be developed to help quickly determine whether a popular piece of scholarship is also high-quality research.
Altmetrics are not currently as user-friendly as the JIF. Critics note that in lacking a single number, rating, or score, altmetrics require scrutiny and interpretation that can be burdensome to end users.
Other critics point out that the ease with which altmetrics can be tallied is also their biggest weakness, as social media metrics and usage statistics are particularly vulnerable to gaming. Automated download bots can generate thousands of download and page view requests in minutes. Tweets, Facebook posts, and blog mentions can be bought. Though publishers and service providers are working to block gaming attempts, there is not yet a neutral auditing organization such as COUNTER (projectcounter.org) that can ensure altmetrics’ quality.
Finally, altmetrics do not apply as readily to traditional works such as books or art. When searching for works in these formats, the option of using altmetrics to supplement search techniques may not apply.
ACCESSING ALTMETRICS FOR RESEARCH
Altmetrics cannot yet be applied to the search process in a manner similar to citation counts and journal impact factors. Only two search databases, Primo (Ex Libris) and Scopus (Elsevier), offer the option to incorporate altmetrics into search results. However, you can use citation counts and journal impact factors—and their associated search strategies—as a good starting place and supplement your approach with altmetrics.
Many publisher websites are now using the services Altmetric.com and ImpactStory to document and display the impact of the articles they publish. If a publisher does not offer altmetrics on its website, you can provide the article’s DOI to ImpactStory (impactstory.org), free of charge, to discover the article’s metrics (including downloads and various altmetrics measures).
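For readers comfortable with a little scripting, the sketch below shows what a DOI lookup might look like programmatically against Altmetric.com’s public details endpoint (api.altmetric.com/v1/doi/<doi>). The endpoint and field names reflect the public API as I understand it and may change over time; ImpactStory offers a comparable web form and API. The example DOI is the research-blogs study listed as reference 11.

```python
# Look up attention data for one article by DOI via Altmetric.com's
# public details endpoint. Endpoint and field names are based on the
# public API as documented at the time of writing and may change.
import requests

doi = "10.1371/journal.pone.0035869"  # Shema, Bar-Ilan, & Thelwall (2012)
response = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if response.status_code == 200:
    data = response.json()
    print("Title:", data.get("title"))
    print("Altmetric score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count", 0))
    print("Blog posts:", data.get("cited_by_feeds_count", 0))
elif response.status_code == 404:
    print("No altmetrics found for this DOI.")
else:
    print("Lookup failed with status", response.status_code)
```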
As the field continues to mature, expect many databases and publishers to begin to incorporate altmetrics for their search results. [As we went to press, the publisher John Wiley & Sons (wiley.com) announced that it would begin a 6-month trial using Macmillan’s Altmetric (altmetric.com) service to track social media sites including Twitter, Facebook, Google+, Pinterest, blogs, newspapers, magazines, and online reference managers such as Mendeley and CiteULike for mentions of scholarly articles published in Wiley journals, including Advanced Materials, Angewandte Chemie, BJU International, Brain and Behavior, Methods in Ecology and Evolution, and EMBO Molecular Medicine. Altmetric will create and display a score for each article measuring the quality and quantity of attention that the particular article receives. The Altmetric score is based on three main factors: the number of individuals mentioning a paper, where the mentions occurred, and how often the author of each mention talks about the article. — Ed.]
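To give a feel for how a weighted attention score along the lines described in that note could be composed, here is a purely hypothetical sketch. The weights and the function are invented for illustration and do not reproduce Altmetric’s proprietary algorithm; they simply combine the three factors mentioned above (who mentioned a paper, where, and how often that author talks about research).

```python
# Hypothetical illustration only: invented weights, not Altmetric's algorithm.
SOURCE_WEIGHTS = {"newspaper": 8.0, "blog": 5.0, "twitter": 1.0, "facebook": 0.25}

def attention_score(mentions):
    """mentions: list of dicts with 'source' and 'author_research_share',
    an assumed fraction of that author's posts that discuss research."""
    score = 0.0
    for m in mentions:
        base = SOURCE_WEIGHTS.get(m["source"], 1.0)   # where the mention occurred
        focus = 0.5 + m["author_research_share"]      # how often the author covers research
        score += base * focus                         # each mention = one distinct individual
    return round(score, 1)

example = [
    {"source": "twitter", "author_research_share": 0.9},
    {"source": "blog", "author_research_share": 0.6},
    {"source": "newspaper", "author_research_share": 0.2},
]
print(attention_score(example))
```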
LOOKING AHEAD
In summary, no research metric is infallible, and no single metric can suss out the full value of scholarship. Use traditional research metrics, usage statistics, and altmetrics in tandem to identify all dimensions of quality research. Altmetrics are an especially useful instrument: built on web services with which many are already familiar, they help people make sense of the world of information.
Librarians are working in an exciting time. The glut of readily available information is both a blessing and a curse for the average searcher, which makes our role as information experts invaluable. Altmetrics are just one more tool we can keep handy to filter out the very best research on behalf of patrons.
References
1. Bar-Ilan, J., Haustein, S., Peters, I., Priem, J., Shema, H., & Terliesner, J. (2012). Beyond citations: Scholars’ visibility on the social web. Accepted to the 17th International Conference on Science and Technology Indicators, Montreal, Canada, 5–8 Sept. 2012. Retrieved from arxiv.org/abs/1205.5611.
2. Davis, P. M., & Solla, L. R. (2003). An IP-level analysis of usage statistics for electronic journals in chemistry: Making inferences about user behavior. Journal of the American Society for Information Science and Technology, 54 (11), 1062–1068. doi:10.1002/asi.10302.
3. Evans, P., & Krauthammer, M. (2011). Exploring the use of social media to measure journal article impact. AMIA Annual Symposium Proceedings, 2011, 374–381. Retrieved from pubmedcentral.nih.gov/articlerender.fcgi?artid=3243242&tool=pmcentrez&rendertype=abstract.
4. Eysenbach, G. (2011). Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. Journal of Medical Internet Research, 13 (4). Retrieved from jmir.org/2011/4/e123.
5. Haustein, S., & Siebenlist, T. (2011). Applying social bookmarking data to evaluate journal usage. Journal of Informetrics, 5 (3), 446–457. Retrieved from dx.doi.org/10.1016/j.joi.2011.04.002.
6. Hernández-Borges, A. A., Macías-Cervi, P., Gaspar-Guardado, M. A., Torres-Alvarez de Arcaya, M. L., Ruiz-Rabaza, A., & Jiménez-Sosa, A. (1999). Can examination of WWW usage statistics and other indirect quality indicators distinguish the relative quality of medical web sites? Journal of Medical Internet Research, 1 (1), E1. doi:10.2196/jmir.1.1.e1.
7. Li, X., & Thelwall, M. (2012). F1000, Mendeley and Traditional Bibliometric Indicators. 17th International Conference on Science and Technology Indicators (Vol. 3, pp. 1–11).
8. Li, X., Thelwall, M., & Giustini, D. (2011). Validating online reference managers for scholarly impact measurement. Scientometrics, 91 (2), 1–11. doi:10.1007/s11192-011-0580-x.
9. Nielsen, F. (2007). Scientific citations in Wikipedia. First Monday, 12 (8). Retrieved from arxiv.org/pdf/0705.2106.
10. Priem, J., Piwowar, H. A., & Hemminger, B. M. (2012). Altmetrics in the wild: Using social media to explore scholarly impact. ArXiv.org. Retrieved from arxiv.org/abs/1203.4745.
11. Shema, H., Bar-Ilan, J., & Thelwall, M. (2012). Research blogs and the discussion of scholarly information. (C. A. Ouzounis, Ed.) PLOS ONE, 7 (5), e35869. doi:10.1371/journal.pone.0035869.
12. Thelwall, M., Haustein, S., Larivière, V., & Sugimoto, C. R. (Preprint). Do altmetrics work? Twitter and ten other social web services. PLOS ONE. Retrieved from scit.wlv.ac.uk/~cm1993/papers/Altmetrics_preprintx.pdf.
13. Wolfe-Simon, F., Switzer Blum, J., Kulp, T. R., Gordon, G. W., Hoeft, S. E., Pett-Ridge, J., Stolz, J. F., Webb, P. K., Davies, P. C. W., Anbar, A. D., and Oremland, R. S. (2011). A bacterium that can grow by using arsenic instead of phosphorus. Science, 332 (6034), pp. 1163–1166. doi:10.1126/science.1197258.
Stacy Konkiel is science data management librarian, Indiana University–Bloomington.