What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). Because I work in the medical/health professions education context, I wanted to find alternative indicators beyond both traditional metrics and altmetrics. Below, I share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model advances 5 pathways of diffusion to track biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these 5 pathways has its own indicators (some indicators appear in more than one pathway). While the Becker Library Model includes traditional indicators, it also suggests some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
The Model offers strong indicators to consider for biomedical research impact, but many of them do not apply to medical/health professions education (e.g. patents, quality of life, clinical practice guidelines, medical devices, licenses).

Kuruvilla et al. (2006)3 developed the Research Impact Framework (RIF) to advance “coherent and comprehensive narratives of actual or potential research impacts,” focusing on health services research. The RIF maps out 4 types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each type of impact has specific indicators associated with it. Novel indicators include definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, and media coverage. The RIF suggests many indicators applicable to disciplines outside clinical biomedicine.

Seeing collaborations (Becker) and email/listserv discussions (RIF) counted as research impact, I started to wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: grey metrics.

Grey metrics denote indicators that are stumbled upon serendipitously and that no existing system tracks: personal email requests or phone conversations, for example, that nonetheless denote impact. I call them “grey metrics” because finding them is a bit like grey literature searching: the evidence exists, but outside formal channels. Grey metrics might include (a sketch of one way to record them follows this list):
• slide sharing (not in repository, but when it’s a personal ask)
• informal consultations (e.g. email exchanges about a topic or your previous research; these exchanges can inform other individuals’ research, sometimes even seeding projects that have won awards, so even informal email consultations show how one’s research and guidance have impact)
• service as expert on panels, roundtables (shows that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. if your research informed curriculum change, or if your paper is included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses, other conference presentations)
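Because grey metrics are by nature serendipitous, the simplest way to capture them may be a running log you append to whenever a request arrives. Below is a minimal sketch in Python of what such a personal log might look like; the field names are my own illustrative assumptions, not a standard schema, and a spreadsheet with the same columns would work equally well.

```python
# A minimal sketch of a personal "grey metrics" log.
# All field names are illustrative assumptions, not a standard schema.
import csv
from dataclasses import dataclass, asdict, fields
from datetime import date
from pathlib import Path

@dataclass
class GreyMetricEntry:
    date_logged: str        # when the request or consultation happened
    indicator_type: str     # e.g. "slide sharing", "informal consultation"
    requester: str          # who asked (person, department, institution)
    work_referenced: str    # the talk, paper, or project involved
    downstream_use: str     # what the requester did with it, if known

LOG_FILE = Path("grey_metrics_log.csv")

def log_entry(entry: GreyMetricEntry) -> None:
    """Append one grey-metric event to the CSV log, writing a header if the file is new."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(GreyMetricEntry)])
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(entry))

# Hypothetical example, based on the slide-sharing story in the next paragraph
log_entry(GreyMetricEntry(
    date_logged=str(date.today()),
    indicator_type="slide sharing (personal ask)",
    requester="Program director, Psychiatry, University of Toronto",
    work_referenced="Authorship ethics talk",
    downstream_use="Slides shared department-wide to guide authorship conversations",
))
```

Even a handful of such entries, reviewed at promotion or annual review time, can turn scattered anecdotes into a documented pattern of impact.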

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD Program directors heard a brief version of this talk and asked for the slides. That Program director (who is also Vice-Chair of Education for Psychiatry at the University of Toronto) shared the slides with her department, which now uses their content to guide all of its authorship conversations and ensure ethical practice. In addition, Stella and Lindsay developed a simulation game to teach about authorship ethics issues. A colleague asked for this game to be shared, and it has since been used in workshops at other institutions. These were personal asks from colleagues, but they demonstrate impact: Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case for impact beyond traditional metrics or altmetrics.

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying and defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; the comment period has closed, but check the website for the revised documents.

As we work on broadening our conceptions of what counts as research impact, we must resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to prescribe which types of indicators to track; rather, I want to encourage researchers to think about which indicators are most appropriate and align best with their context and work.

We must always be cognizant and vigilant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6:134. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.