Impactful research

by Nicole Eva-Rice, Liaison Librarian for Management, Economics, Political Science, and Agriculture Studies, University of Lethbridge Library

Why do we do research? Is it simply to fulfill our obligations for tenure and promotion? Is it to satisfy our curiosity about some phenomenon? Or is it to help our fellow librarians (or researchers in another discipline) to do their jobs, or further the knowledge in our field?

I find myself grappling with these thoughts when embarking on a new research project. Sometimes it’s difficult to see the point of our research when we are stuck on the ‘publish or perish’ hamster wheel, and I suspect it’s all the more so for faculty outside of librarianship. It’s wonderful when we have an obvious course set out for us and can see the practical applications of our research – finding a cure for a disease, for example, or a way to improve school curriculum – but what if the nature of our research is more esoteric? Does the world need another article on the philosophy of librarianship, or the creative process in research methods? Or are these ‘make work’ projects for scholars who must research in order to survive in academe?

My most satisfying research experiences, and the ones I most appreciate from others, have to do with the practical aspects of my job. I love research that can directly inform my day-to-day work, so that I know any decisions I make based on that research are grounded in evidence. If someone has researched the effectiveness of flipping a one-shot and can show me whether it’s better or worse than the alternative, I appreciate their efforts both in performing the study and in publishing their results, because I can benefit directly from their experience. Likewise, if someone publishes an article on how they systematically analyzed their serials collections to make cuts, I can put their practices to use in my own library. I may not cite those articles – in fact, most people won’t unless they do further research along that line – but they have a direct impact on the field of librarianship.

Unfortunately, that impact is invisible to the authors, unless we make a point of contacting them and telling them how we were able to apply their research in our own institutions (and I don’t know about you, but I have never done that, nor had it occurred to me to do so until just this minute). So measuring ‘impact’ by citations, tweets, or downloads just doesn’t do justice to the true impact of an article. Even a philosophy-of-librarianship article could have serious ‘impact’ in the way it affects how someone approaches their job – but unless the reader goes on to write another article citing it, there is nothing to prove the very real impact the original has made.

In fact, the research doesn’t even have to result in a scholarly article – if I read a blog post on some of these topics, I might still be able to benefit from it and use the ideas in my own practice. Of course, this depends on exactly what the content is and how much rigor you need in replicating the procedure in your own institution, but sometimes I find blog posts more useful in my day-to-day practice than the actual scholarly articles. Even the philosophical-type posts are more easily digested and contemplated in the length and tone of a more informal publication.

This is all to say that I think the way we measure and value academic research is seriously flawed – something many librarians (and other academics) would agree with, but that others in academia still strongly adhere to. This is becoming almost a moral issue for me. Why does everything have to be measurable? Why can’t STP committees take the research project as described at face value, and accept other types of impact it could have on readers/policy makers/practitioners rather than assigning a numerical value based on where it was published and how many times it was cited?

When I hear other faculty members discussing their research, even if I don’t know anything about their subject area, I can often tell whether it will have ‘real’ impact or not. The health sciences researcher whose report to the government resulted in policy change obviously had a real impact – but she won’t have a peer-reviewed article to list on her CV (unless she goes out of her way to create one to satisfy the process), nor will she likely have citations (unless the aforementioned article is written). It also makes me think about my next idea for a research project, which is truly just something I’ve been curious about, but for which I can’t see many practical implications other than serving others’ curiosity. It’s a departure for me because I am usually the most practical of people, and my research usually has to serve the dual purpose of having application in my current workplace as well as becoming fodder for another line on my CV. As I have been thinking about the implications of impact more and more, I realize that as publicly paid employees, perhaps we have an obligation to make our research have as wide a practical impact as possible. What do you think? Have we moved beyond the luxury of researching for research’s sake? As employees of public institutions, do we have a societal obligation to produce practical outcomes? I’m curious as to what others think and would love to continue the conversation.

For more on impact and what can count as evidence of it, please see Farah Friesen’s previous posts on this blog, What “counts” as evidence of impact? Part 1 and Part 2.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). I work specifically in the medical/health professions education context and so wanted to find alternative indicators, beyond both traditional metrics and altmetrics. Below, I will share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model advances 5 pathways of diffusion to track biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these 5 pathways has indicators (some indicators are found in more than one pathway). While the Becker Library Model includes traditional indicators, it also suggests some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
This Model has great indicators to consider for biomedical research impact, but many of them do not apply to medical/health professions education (e.g. patents, quality of life, clinical practice guidelines, medical devices, licenses, etc.).

Kuruvilla et al (2006)3 developed the Research Impact Framework (RIF) as a way to advance “coherent and comprehensive narratives of actual or potential research impacts” focusing on health services research. The RIF maps out 4 types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each type of impact area has specific indicators associated with it. Novel indicators include definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, and media coverage. The RIF suggests many indicators applicable to non-clinical/biomedicine disciplines.

Seeing collaborations (Becker) and email/listserv discussions (RIF) counted as research impact, I started to wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: Grey Metrics.

Grey metrics denote metrics that are stumbled upon and serendipitous, and for which there is no real systematic way of tracking. These can include personal email asks or phone conversations that actually denote impact. I call them “grey metrics” because they are somewhat like grey literature searching. Grey metrics might include:
• slide sharing (not in repository, but when it’s a personal ask)
• informal consultations (e.g. through email, about a topic or your previous research. These email connections can sometimes inform other individuals’ research – sometimes even for them to develop projects that have won awards. So even if the consultations are informal via email, it shows how one’s research and guidance has an impact!)
• service as expert on panels, roundtables (shows that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. if your research informed curriculum change, or if your paper is included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses, other conference presentations)

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD Program directors heard a brief version of this talk and asked for the slides. That Program director (who also happens to be Vice-Chair of Education for Psychiatry at the University of Toronto) has shared the slides with her department, which now uses their content to guide all of its authorship conversations and ensure ethical practice. In addition, Stella and Lindsay developed a simulation game to teach about authorship ethics issues. A colleague asked for this game to be shared, and it has now been used in workshops at other institutions. These were personal asks from colleagues, but they demonstrate impact in terms of how Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case for impact beyond traditional metrics or altmetrics.

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying/defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; check the website for their revised documents.

As we work on broadening our conceptions of what counts as research impact, we must try to resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to be prescriptive about what types of indicators to track. I want to encourage researchers to think about what indicators are most appropriate and align best with their context and work.

We must always be cognizant and vigilant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6:134. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.