What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed taking a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). Because I work specifically in the medical/health professions education context, I wanted to find alternative indicators beyond both traditional metrics and altmetrics. Below, I share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model advances 5 pathways of diffusion to track biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these 5 pathways has indicators (some indicators appear in more than one pathway). While the Becker Library Model includes traditional indicators, it also suggests some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
This Model offers great indicators to consider for biomedical research impact, but many of its indicators do not apply to medical/health professions education (e.g. patents, quality of life, clinical practice guidelines, medical devices, licenses, etc.).

Kuruvilla et al. (2006)3 developed the Research Impact Framework (RIF) as a way to advance “coherent and comprehensive narratives of actual or potential research impacts”, focusing on health services research. The RIF maps out 4 types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each type of impact has specific indicators associated with it. Novel indicators include: definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, and media coverage. The RIF suggests many indicators applicable to non-clinical/non-biomedical disciplines.

Seeing collaborations (Becker) and email/listserv discussions (RIF) counted as research impact, I started to wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: grey metrics.

Grey metrics are metrics that are stumbled upon serendipitously but for which there is no real systematic way of tracking. They can include personal email requests or phone conversations that nonetheless denote impact. I call them “grey metrics” because finding them is a bit like grey literature searching. Grey metrics might include:
• slide sharing (not in repository, but when it’s a personal ask)
• informal consultations (e.g. through email, about a topic or your previous research. These email exchanges can sometimes inform other individuals’ research – sometimes even helping them develop projects that have gone on to win awards. So even though the consultations are informal, they show how one’s research and guidance have an impact!)
• service as expert on panels, roundtables (shows that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. if your research informed curriculum change, or if your paper is included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses, other conference presentations)

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD Program directors heard a brief version of this talk and asked for the slides. That Program director (who also happens to be Vice-Chair of Education for Psychiatry at the University of Toronto) has shared those slides with her department, which now uses their content on authorship ethics to guide all of its authorship conversations and ensure ethical practice. In addition, Stella and Lindsay developed an authorship ethics simulation game to teach about these issues. A colleague asked for this game to be shared, and it has now been used in workshops at other institutions. These were personal asks from colleagues, but they demonstrate impact in terms of how Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case for impact beyond traditional metrics or altmetrics.
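For researchers who want to capture these serendipitous signals, even a very lightweight log helps turn stumbled-upon requests into evidence that can be summarized later. Below is a minimal sketch of such a log in Python; the field names, dates, and entries are hypothetical illustrations (loosely modelled on the story above), not part of any formal framework.

```python
# A minimal sketch of a personal "grey metrics" log.
# Field names, dates, and entries are illustrative only.
from dataclasses import dataclass
from datetime import date
import csv

@dataclass
class GreyMetricEntry:
    when: date                # when the request or mention happened
    indicator: str            # e.g. "slide sharing", "informal consultation"
    requester: str            # person or group who reached out
    artefact: str             # what was shared or discussed
    downstream_use: str = ""  # known outcome, e.g. "used in a departmental workshop"

def export_log(entries, path="grey_metrics.csv"):
    """Write the log to CSV so it can be summarized at review or promotion time."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "indicator", "requester", "artefact", "downstream_use"])
        for e in entries:
            writer.writerow([e.when.isoformat(), e.indicator, e.requester,
                             e.artefact, e.downstream_use])

# Hypothetical entries, loosely modelled on the authorship ethics example above
log = [
    GreyMetricEntry(date(2016, 3, 1), "slide sharing", "program director",
                    "authorship ethics talk slides",
                    "adopted to guide departmental authorship conversations"),
    GreyMetricEntry(date(2016, 5, 12), "resource sharing", "external colleague",
                    "authorship ethics simulation game",
                    "used in workshops at other institutions"),
]
export_log(log)
```

A plain spreadsheet works just as well; the point is simply to record these requests as they happen, because they are easy to forget and nearly impossible to reconstruct later.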

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying/defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; the comment period has now closed, but check the website for the revised documents.

As we work on broadening our conceptions of what counts as research impact, we must try to resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to be prescriptive about what types of indicators to track. I want to encourage researchers to think about what indicators are most appropriate and align best with their context and work.

We must always be cognizant and vigilant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6:134. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

The Problem with the Present: C-EBLIP Journal Club, June 21, 2016

by Stevie Horn
University Archives and Special Collections, University of Saskatchewan

Article: Dupont, Christian & Elizabeth Yakel. “’What’s So Special about Special Collections?’ Or, Assessing the Value Special Collections Bring to Academic Libraries.” Evidence Based Library and Information Practice [Online], 8.2(2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

I was pleased to have the opportunity to lead the last C-EBLIP Journal Club session of the season. I chose an article that looks at the difficulties of employing performance measures to assess the value of a special collection or archives to the academic library. The article has some failings: it is written largely from a business perspective, and it uses special collections and archives interchangeably in a way that becomes problematic if you consider the archives’ responsibility as a repository for institutional records (which go through many different phases of use). Nevertheless, it served as a useful springboard for our discussion.

What interested me was that those present immediately latched on to the problem of “What about preservation value?” when considering the article’s model of measuring performance. The article posits that the best way to measure a special collection’s or archives’ “return on investment” is not simply to count the number of times an item is used (a collection-based method), but rather to report the number of hours a user spends working with an item and what the learning outcomes of that use are determined to be (a user-based method) (Dupont and Yakel, 11).

In some ways, a user-centric approach to measuring performance in archives and special collections makes good sense. A single researcher may spend five weeks exploring fifteen boxes, or taking a close look at a single manuscript, so recording the user hours spent may prove a more accurate measure of use. Reinforcing this, there are a number of difficulties in applying collection-based metrics to manuscript collections. Individual documents studied within an archival collection are almost impossible to track; generally a file is treated as an “item”, and the number of files in a box might be averaged. The article points out, accurately, that this imprecision renders collection-based tabulation of archival documents, images, and ephemera virtually “meaningless” (Dupont and Yakel, 14).
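To make the distinction concrete, here is a minimal sketch (in Python, with invented reading-room visit records) of how the two counting approaches can diverge. It is only an illustration of the distinction, not Dupont and Yakel’s actual methodology.

```python
# Invented reading-room visit records: who used which collection, and for how long.
visits = [
    {"researcher": "A", "collection": "MG-001", "hours": 6.5},
    {"researcher": "A", "collection": "MG-001", "hours": 7.0},
    {"researcher": "B", "collection": "MG-014", "hours": 0.5},
    {"researcher": "C", "collection": "MG-014", "hours": 0.5},
]

retrievals = {}   # collection-based metric: number of times each collection was used
user_hours = {}   # user-based metric: researcher-hours each collection supported
for v in visits:
    retrievals[v["collection"]] = retrievals.get(v["collection"], 0) + 1
    user_hours[v["collection"]] = user_hours.get(v["collection"], 0.0) + v["hours"]

print(retrievals)   # {'MG-001': 2, 'MG-014': 2}
print(user_hours)   # {'MG-001': 13.5, 'MG-014': 1.0}
```

Here the collection-based count makes the two collections look identical, while the user-based tally reveals the sustained engagement with MG-001. Neither, of course, captures the preservation value discussed below.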

However, if the end goal is determining “return on investment”, user-centric data also leaves out a large piece of the picture: the previously mentioned “preservation value”, or the innate value of safeguarding unique historical documents. Both collection-based and user-based metrics record current usage in order to determine the value of a collection at the present time. This in-the-present approach becomes problematic when applied to special collections or archives, however, for the simple reason that these bodies preserve the past not only for study in the present, but also for study in the distant future.

To pull apart this problem of using present-based metrics to measure the worth of a future-purposed unit of the academic library, consider the recent surge in scholarship surrounding Aboriginal histories. As Truth and Reconciliation surfaces in the public consciousness, materials which may have been ignored for decades within archival/special collections are now in high demand; questions of this nature accounted for approximately forty percent of our usage in the last month alone. Had collection-centric or user-centric metrics been applied during those decades of non-use, these materials would have appeared to be of little worth, and the special collections/archives’ “return on investment” might also have been brought into question. The persistence of archives and special collections in preserving unique historic materials, regardless of patterns of use, means that these materials can play a role in changing perspectives and changing lives nationwide.

If, as Albie Sachs says in his 2006 article on “Archives, Truth, and Reconciliation”, archives and special collections preserve history “for the unborn . . . not, as we used to think, to guard certainty [but] to protect uncertainty because who knows how the future might use those documents”, might not the employment of only present-centric metrics do more damage than good? (Sachs, 14). And, if the value of an archives or special collections cannot be judged solely in the present, but must take an unknown and unknowable future into account, perhaps the formulation of a truly comprehensive measure of “return on investment” in this field is impossible.

Sources:
Dupont, Christian & Elizabeth Yakel. “’What’s So Special about Special Collections?’ Or, Assessing the Value Special Collections Bring to Academic Libraries.” Evidence Based Library and Information Practice [Online], 8.2(2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

Sachs, Albie. “Archives, Truth, and Reconciliation”. Archivaria, 62 (2006): 1-14.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Some Musings on Metrics

by Marjorie Mitchell
Librarian, Learning and Research Services
UBC Okanagan Library

As the first few weeks of the new academic year wrap up in Canada, academic librarians can now shift their focus from orienting new students back to supporting faculty and graduate students, especially with research-focused support. Many researchers are preparing grant funding applications for the fall round of deadlines, and the systems for assessing these applications are becoming ever more complex.

As research funding becomes a global competition, how are funders to decide which research deserves their support?

Over the past few years, global discussions regarding the various metrics used to measure research impact have increased. Within their institutional research communications, administrators use impact metrics to compare their institutions to others, either nationally or internationally. Within their funding applications, researchers use impact factors to indicate the importance and worthiness of their research. One real appeal of metrics is that they are tangible, objective measures of the real use of a product of scholarly research. Or are they?

Bibliographic citation databases have been in continuous development since the 1950s and have formed a broad base for a range of publication metrics, especially article and journal metrics. These metrics have not been without issues, not least the variation in citation patterns between disciplines and the potential for researchers to attempt to “play” the system to make their research appear to have had greater impact than it actually has.
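For readers less familiar with journal metrics, the best known is the two-year journal impact factor: citations received in a given year to items a journal published in the previous two years, divided by the number of citable items it published in those years. The sketch below shows the conventional calculation with invented figures.

```python
# Conventional two-year journal impact factor, computed from invented figures.
def two_year_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

citations_in_2015_to_2013_2014 = 420  # hypothetical citations received in 2015
items_published_2013_2014 = 150       # hypothetical citable items from 2013-2014
print(two_year_impact_factor(citations_in_2015_to_2013_2014,
                             items_published_2013_2014))  # 2.8
```

Because typical citation rates vary widely between disciplines, the same 2.8 could be outstanding in one field and unremarkable in another, which is one reason such metrics are so easily misread and “played”.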

Coined in 2010 by Priem, “alternative metrics” measure the impact of newer, non-traditional forms of scholarship that are published and discussed outside academic journals or conference proceedings. Digital humanities, community-involved research, and other emerging forms of scholarship prove challenging for grant funding bodies and administrators to assess. Interestingly, books have neither been covered extensively in bibliographic citation databases nor been subject to computerized citation analysis to the same degree as journal articles or newer, non-traditional forms of scholarly publication. All of these areas are fertile ground for conversations led by librarians.

Does this matter?

Institutionally, librarians can help both researchers and administrators to gain a fuller understanding of the uses, and potential pitfalls from misuse, of metrics of all varieties. The broader the understanding of the subtleties of metrics, the less likely they are to be misunderstood and/or misrepresented. Ultimately, this greater understanding could form the basis for a more balanced and equitable story of research happening within our universities.

Priem, J., & Hemminger, B. (2010). Scientometrics 2.0: New metrics of scholarly impact on the social Web. First Monday, 15(7). Retrieved September 21, 2015, from http://firstmonday.org/ojs/index.php/fm/article/view/2874/2570

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.