Measuring impact in the age of altmetrics

Image courtesy of the Unseen Words Project

Earlier this semester, during my course on academic libraries and scholarly communications, we discussed peer review: its purpose, its relation to the tenure process, alternatives to it, and the library’s role within it. One of the pieces we were assigned to read in preparation for this class session was the San Francisco Declaration on Research Assessment (DORA, 2013), which criticizes the use of journal impact factor to determine the merit of a researcher’s contributions and argues that research should no longer be assessed by the journal in which it was published. DORA also recommends that academia capitalize on the opportunities online publishing presents and urges it to be more inclusive of non-article research outputs. This document, part critique and part manifesto, has contributed to a growing conversation over the past few years about how the full impact of research cannot be measured by citation counts alone.

Cue the emergence of alternative metrics, or altmetrics: nontraditional, web-based metrics used to assess broader social impact.

Through online tools and environments, altmetrics track citations online and give credit for a researcher’s scholarly contributions outside the traditional venue of peer-reviewed academic journals. These unorthodox, measurable forms of data include the number of article downloads, mentions on Facebook, and unique website visits, to name a few. So what exactly is so significant about altmetrics? How do we determine their impact? And is it true that social media and related environments are the next frontier for communicating scholarship? I intend to get to the bottom of this debate, and do a little altmetric work of my own in the process. Before diving into the world of altmetrics, however, I’d like to discuss journal impact factor a bit further.

Why look for an alternative to impact factor? The main criticism (by DORA and others) is that journal impact factor has been misused: it was created as a tool for librarians to measure the impact of journals, not the works of an individual. Critics also argue that impact factor encourages “gaming the system”: journals favor submissions likely to draw heavy citation, such as commissioned review articles, while important works that may not receive a large number of citations struggle to get published. Lastly, journal impact factor applies only to journals, and therefore overlooks large areas of scholarship such as books, book chapters, datasets, slides, videos, visualizations, and other digital humanities projects. To encourage and promote these types of scholarship, we need to recognize the impact of such work alongside traditional research articles.
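For reference, the standard two-year impact factor is a simple journal-level ratio, which helps explain why critics consider it a poor proxy for any single article or author:

```latex
% Two-year journal impact factor for year y (journal-level, not article-level)
\mathrm{JIF}_y \;=\;
  \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
       {\text{citable items published in years } y-1 \text{ and } y-2}
```

Because citation counts are highly skewed, a handful of heavily cited papers can dominate the numerator, so this journal-level average says little about any individual paper published in that journal.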

Those in favor of altmetrics assert that the landscape of scholarly communication has changed, and that measures for determining impact should evolve with it. Writing in Information Today (2013), Joe Esposito, a management consultant working in digital media, software, and publishing, likens this to the evolution of metrics in conventional publishing:

In the past, a magazine’s circulation was sufficient for establishing the value of advertising. But in today’s online world, we also have metrics like page views, unique visits, and links to Facebook and Twitter. New circumstances support new metrics, a fact that is as true for scholarly research as it is for advertising.

Proponents of altmetrics argue that they are a much-needed addition for making hiring, tenure, and funding decisions, and that a scholar’s worth should be measured by more than impact factor alone. These supporters view social media platforms such as Twitter as an online water cooler or a global faculty lounge that leaves behind traces of data for evaluation. This is a revolutionary new environment for study: until recently, there was simply no way to track these kinds of activities. Furthermore, the peer-review process takes time, both in getting an article published and in accumulating citations. In a publish-or-perish culture, many feel that altmetrics can help newer researchers demonstrate their merit.

Those opposed to altmetrics assert that, although they may offer researchers some useful preliminary signals, the field is still too vague to be used for evaluating serious research. Skeptics question the reliability of unregulated social-media activity as a meaningful measure of scholarship and have voiced concerns over how susceptible this data is to manipulation. Many of these opponents feel that while altmetrics have the potential to become valuable in the future, they do not yet have the reach or scope to measure social impact accurately.

In order to better understand this debate, I decided to conduct a brief examination of the impact of one scholar’s research via traditional citation metrics versus via altmetrics. In the interest of accuracy, it was important to choose someone both well established in and relatively new to their chosen field. In other words, to get a fair reading, I needed a scholar whose peer-reviewed publications have been around long enough to garner significant citations, but whose work does not predate the rise of social media. I decided to study the output of Dr. Michael Zimmer, a privacy and Internet ethics scholar, as he fits this bill nicely. Zimmer is an Associate Professor in the School of Information Studies at the University of Wisconsin-Milwaukee, where he also directs the Center for Information Policy Research.

According to Google Scholar, Zimmer has 58 publications, 35 of which have been cited a total of 578 times. This earns him an h-index of 12, which places him somewhere in the realm of average or just-above-average impact for someone in the social sciences. He also has an i10-index of 14, meaning 14 of his publications have been cited at least 10 times.
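For readers unfamiliar with these measures, here is a minimal sketch of how both indices fall out of a list of per-publication citation counts. The counts below are invented for illustration (they are not Zimmer’s actual numbers), chosen so the output mirrors the indices reported above:

```python
# Sketch: deriving h-index and i10-index from per-publication citation
# counts. The citation list below is made up for illustration only.

def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

citations = [48, 45, 40, 33, 30, 25, 22, 20, 18, 15, 14, 12, 11, 10, 3]
print(h_index(citations))    # 12
print(i10_index(citations))  # 14
```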

To measure Zimmer’s altmetrics, I gathered publicly available data from Academia.edu, where the number of views his papers received placed him in the top 5% of total views on the site over the past 30 days. I also examined SlideShare and used the tool ImpactStory to gather more contextual insight. I learned that Zimmer’s 22 presentations have drawn more than 35,000 views, that 7 of them are “highly viewed” by the public (see image below), and that 4 of those 7 are also “recommended” (i.e., favorited by the SlideShare community).

Screen grab of ImpactStory explaining “highly viewed”

Another freely available tool that proved informative was the Altmetric It bookmarklet, which can be installed in Chrome, Firefox, or Safari and gathers article-level metrics such as tweets, blog references, Mendeley readers, and more. Its drawbacks are that it currently works only on pages containing a DOI and that it can only track Twitter mentions for articles published since July 2011. Furthermore, several of Zimmer’s papers received Altmetric scores based on blog references which, upon further investigation, turned out to be posts from Zimmer’s own website announcing the publication of his papers (see image below). Although this is clearly not a case of obfuscation or cheating the system, it does illustrate the skeptics’ concern about how easily this unregulated data can be manipulated. A plethora of other free altmetrics tools exists as well, allowing scholars to share and review each other’s work; these include ResearchGate, Peer Evaluation, CiteULike, figshare, PaperCritic, and Scholarometer.

Altmetric It bookmarklet screen grab
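For the curious, the data the bookmarklet surfaces can also be pulled programmatically. Below is a minimal sketch against Altmetric’s free public API; the endpoint is real, but the JSON field names reflect my reading of its documentation at the time of writing and may change, and the DOI shown is a made-up placeholder:

```python
# Sketch: fetching article-level metrics for one DOI from the free
# Altmetric API (roughly the same data the bookmarklet displays).
# Field names are assumptions based on the API's documented JSON;
# the DOI below is a placeholder, not a real paper.
import requests

def altmetric_summary(doi):
    resp = requests.get("https://api.altmetric.com/v1/doi/" + doi)
    if resp.status_code == 404:
        return None  # Altmetric has no record of this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "altmetric_score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "blog_mentions": data.get("cited_by_feeds_count", 0),
        "mendeley_readers": data.get("readers", {}).get("mendeley", 0),
    }

print(altmetric_summary("10.1234/example-doi"))  # placeholder DOI
```

Note that, like the bookmarklet, this returns nothing for outputs without a DOI, which is exactly the coverage gap discussed above.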

Based on my research, I have found that altmetrics are somewhat misnamed: they are not so much an alternative form of metrics as an augmentative one. I see altmetrics as a complement to, not a replacement for, traditional citation metrics in the age of digital scholarship, since these measures can tell very different stories. Even in my brief research, the impression of Dr. Michael Zimmer I formed from the SlideShare and Academia.edu data was different from the one I had gotten from his h-index and i10-index alone. By combining the knowledge and impressions from both arenas, however, I was able to get a much more well-rounded picture of this scholar’s impact on academia as a whole.

It is unclear what the future holds, as this new era of scholarly communication has only just begun. In recent years, organizations such as NISO, PLOS, and Plum Analytics have organized events to discuss, and begin to establish, best practices in this emerging field of study. As these conferences, webinars, and other events continue, altmetrics could become a more standardized practice, ultimately yielding more precise, auditable results. As our networked society continues to grow in complexity, it seems only natural that our methods for assessment should grow and expand right along with us. The question now becomes: how do we analyze and evaluate these multivariate contributions in a way that ensures the results are meaningful?
