What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). I work specifically in the medical/health professions education context and so wanted to find alternative indicators, beyond both traditional metrics and altmetrics. Below, I share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model sets out five pathways of diffusion for tracking biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these five pathways has its own indicators (some indicators appear in more than one pathway). While the Becker Library Model includes traditional indicators, it also suggests some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
This Model offers useful indicators to consider for biomedical research impact, but many of them do not apply to medical/health professions education (e.g. patents, quality-of-life measures, clinical practice guidelines, medical devices, licenses).

Kuruvilla et al. (2006)3 developed the Research Impact Framework (RIF) as a way to advance “coherent and comprehensive narratives of actual or potential research impacts,” focusing on health services research. The RIF maps out four types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each type of impact has specific indicators associated with it. Novel indicators include: definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, and media coverage. The RIF suggests many indicators applicable to non-clinical/non-biomedical disciplines.

Seeing collaborations (Becker) and email/listserv discussions (RIF) counted as research impact, I started to wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: grey metrics.

Grey metrics denote indicators that are stumbled upon serendipitously and for which there is no real systematic way of tracking. They can include personal email requests or phone conversations that nonetheless denote impact. I call them “grey metrics” because finding them is a bit like grey literature searching. Grey metrics might include:
• slide sharing (not via a repository, but in response to a personal request)
• informal consultations (e.g. email exchanges about a topic or your previous research; these exchanges can inform other people’s research, sometimes even leading to award-winning projects, so even informal email consultations show how one’s research and guidance have impact)
• service as an expert on panels or roundtables (showing that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. your research informed a curriculum change, or your paper was included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses or other conference presentations)

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD program directors heard a brief version of this talk and asked for the slides. That program director (who also happens to be Vice-Chair of Education for Psychiatry at the University of Toronto) shared those slides with her department, which now uses their content to guide all of its authorship conversations and ensure ethical practice. In addition, Stella and Lindsay developed an authorship ethics simulation game to teach about authorship ethics issues. A colleague asked for this game to be shared, and it has since been used in workshops at other institutions. These were personal requests from colleagues, but they demonstrate impact in terms of how Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case of impact beyond traditional metrics or altmetrics.
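There is no established tool for capturing grey metrics, but even a simple personal log makes these serendipitous events easier to assemble into an impact narrative later. Below is a minimal sketch in Python; the log fields (date, indicator, requester, artifact, outcome) are my own illustrative choices and are not drawn from the Becker Model or the RIF.

# A minimal personal "grey metrics" log, assuming a plain CSV file is enough.
# The field names are illustrative, not part of any framework cited above.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("grey_metrics_log.csv")
FIELDS = ["date", "indicator", "requester", "artifact", "outcome"]

def log_grey_metric(indicator, requester, artifact, outcome, when=None):
    """Append one serendipitous impact event to the personal log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": (when or date.today()).isoformat(),
            "indicator": indicator,   # e.g. "slide sharing", "informal consultation"
            "requester": requester,   # who asked, and their role/institution
            "artifact": artifact,     # what was shared or discussed
            "outcome": outcome,       # what happened as a result, if known
        })

if __name__ == "__main__":
    log_grey_metric(
        indicator="slide sharing",
        requester="program director, partner department",
        artifact="authorship ethics talk slides",
        outcome="slides now guide the department's authorship conversations",
    )

A spreadsheet would do just as well; the point is simply to record the request, who made it, and what came of it while the details are still fresh.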

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying/defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; the comment period has now closed, but check the website for the revised documents.

As we work on broadening our conceptions of what counts as research impact, we must try to resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to be prescriptive about what types of indicators to track. I want to encourage researchers to think about what indicators are most appropriate and align best with their context and work.

We must always remain cognizant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6:134. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Some Musings on Metrics

by Marjorie Mitchell
Librarian, Learning and Research Services
UBC Okanagan Library

As the first few weeks of the new academic year wrap up in Canada, academic librarians can now shift their focus from orienting new students back to supporting faculty and graduate students, especially research-focused support. Many researchers are preparing grant funding applications for the fall round of deadlines, and the systems for assessing these applications are becoming ever more complex.

As research funding becomes a global competition, how are funders to decide which research deserves their support?

Over the past few years, global discussion of the various metrics used to determine research impact has increased. Within their institutional research communications, administrators use impact metrics to compare their institutions to others, nationally or internationally. Within their funding applications, researchers use impact factors to indicate the importance and worthiness of their research. One real appeal of metrics is that they are tangible, objective measures of the real use of a product of scholarly research. Or are they?

Bibliographic citation databases have been in continuous development since the 1950s and form a broad base for a range of publication metrics, especially article- and journal-level metrics. These metrics are not without issues, not least the variation in citation patterns between disciplines and the potential for researchers to “play” the system to make their research appear to have had greater impact than it actually has.
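To make “journal metrics” concrete, the best-known example is the two-year journal impact factor: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items published in those two years. A minimal sketch, with invented numbers purely for illustration:

# Two-year journal impact factor: citations this year to items from the two
# preceding years, divided by the citable items published in those two years.
def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    return citations_this_year / citable_items_prev_two_years

# e.g. 410 citations in 2016 to articles a journal published in 2014-2015,
# which together contained 180 citable items:
print(round(two_year_impact_factor(410, 180), 2))  # 2.28

The disciplinary variation mentioned above shows up directly in this ratio: fields that cite quickly and heavily produce much larger values than fields with slower citation habits, which is one reason impact factors should not be compared across disciplines.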

Coined in 2010 by Priem, “alternative metrics” measure the impact of newer, non-traditional forms of scholarship published and discussed outside academic journals or conference proceedings. Digital humanities, community-involved research, and emerging forms of scholarship prove challenging for grant funding bodies and administrators to assess. Interestingly, books have neither been extensively covered in bibliographic citation databases nor subjected to computerized citation analysis to the same degree as journal articles or newer, non-traditional forms of scholarly publication. All of these instances are fertile ground for conversations led by librarians.

Does this matter?

Institutionally, librarians can help both researchers and administrators gain a fuller understanding of the uses, and the potential pitfalls of misuse, of metrics of all varieties. The broader the understanding of the subtleties of metrics, the less likely they are to be misunderstood and/or misrepresented. Ultimately, this greater understanding could form the basis for a more balanced and equitable story of the research happening within our universities.

Priem, J., & Hemminger, B. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7). Retrieved September 21, 2015, from http://firstmonday.org/ojs/index.php/fm/article/view/2874/2570

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Altmetrics: what does it measure? Is there a role in research assessment? C-EBLIP Journal Club April 6, 2015

by Li Zhang
Science and Engineering Libraries, University of Saskatchewan

Finally, I had the opportunity to lead the C-EBLIP Journal Club on April 6, 2015! This was originally scheduled for January, but was cancelled due to my injury. The article I chose was:

How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications. By Zohreh Zahedi, Rodrigo Costas, and Paul Wouters. Scientometrics, 2014, Vol. 101(2), pp. 1491-1513.

There are several reasons why I chose this article on altmetrics. First, in the University of Saskatchewan Library, research is part of our assignment of duties. Inevitably, how to evaluate librarians’ research outputs has been a topic of discussion in the collegium. Citation indicators are probably the most widely used tool for evaluating publications, but with the advancement of technology and new modes of communication, how do we capture the impact of scholarly activities in those alternative venues? Altmetrics seems to be a timely addition to the discussion. Second, altmetrics is an area in which I am interested in developing expertise. My research interests encompass bibliometrics and its application in research evaluation, so it is natural to extend them to this new, emerging field. Third, this paper not only presents detailed information on the methods used in the research but also provides a balanced view of altmetrics, helping us to understand how altmetric analysis is conducted and to be aware of the issues around these new metrics.

We briefly discussed the methodology and main findings of the article. Some of the interesting findings: Mendeley readership was probably the most useful source for altmetrics, while mentions of the publications in other types of media (such as Twitter, Delicious, and Wikipedia) were very low; Mendeley readership counts also had a moderate positive correlation with citation counts; and in some fields of the social sciences and humanities, altmetric counts were actually higher than citation counts, suggesting altmetrics could be a useful tool for capturing the impact of scholarly publications from different sources in these fields, in addition to citation indicators.
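For readers curious about what a “moderate positive correlation” looks like in practice, correlations between altmetric counts and citation counts are typically computed with a rank-based measure such as Spearman’s rho, because both kinds of counts are heavily skewed. The sketch below uses invented numbers purely to show the computation; it is not data from Zahedi, Costas, and Wouters.

# Toy illustration of correlating Mendeley readership with citation counts.
# The counts for these ten imaginary papers are made up for demonstration.
from scipy.stats import spearmanr

mendeley_readers = [12, 0, 45, 3, 28, 7, 60, 15, 2, 33]
citation_counts  = [ 3, 4, 18, 0,  2, 9, 25,  1, 6, 12]

rho, p_value = spearmanr(mendeley_readers, citation_counts)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# For these made-up numbers this prints a moderate rho of about 0.53.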

Later in the session, we discussed a couple of issues related to altmetrics. Although measuring the impact of scholarly publications in alternative sources has gained notice, it is not yet clear why publications are mentioned in these sources. What kind of impact do altmetrics measure? With traditional citation indicators, at least we know that the cited articles stimulated or informed the current research in some way (whether positively or negatively). In contrast, a paper appearing in Mendeley does not necessarily mean it has been read. Similarly, a paper mentioned on Twitter could be just self-promotion (and there is nothing wrong with that!). From here, we extended our discussion to publishing behaviours and promotion strategies. Are social scientists more likely than natural scientists to use social media to promote their research and publications? The award criteria and merit systems in academia will also play a role: if altmetrics are counted as an indication of the quality of publications, we may see a sudden surge of social media use by researchers. Further, it is much easier to manipulate altmetrics than citation metrics. Care needs to be taken before we can confidently use altmetrics as a reliable tool for measuring scholarly activity.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.