Impactful research

by Nicole Eva-Rice, Liaison Librarian for Management, Economics, Political Science, and Agriculture Studies, University of Lethbridge Library

Why do we do research? Is it simply to fulfill our obligations for tenure and promotion? Is it to satisfy our curiosity about some phenomenon? Or is it to help our fellow librarians (or researchers in another discipline) to do their jobs, or further the knowledge in our field?

I find myself grappling with these thoughts when embarking on a new research project. Sometimes it’s difficult to see the point of our research when we are stuck on the ‘publish or perish’ hamster wheel, and I suspect it’s all the more so for faculty outside of librarianship. It’s wonderful when we have an obvious course set out for us and can see the practical applications of our research – finding a cure for a disease, for example, or a way to improve school curriculum – but what if the nature of our research is more esoteric? Does the world need another article on the philosophy of librarianship, or the creative process in research methods? Or are these ‘make work’ projects for scholars who must research in order to survive in academe?

My most satisfying research experiences, and the ones I most appreciate from others, have to do with practical aspects of my job. I love research that can directly inform my day-to-day work, and I want to know that any decisions I make based on that research are grounded in evidence. If someone has researched the effectiveness of flipping a one-shot and can show me whether it’s better or worse than the alternative, I am very appreciative of their efforts in both performing the study and publishing their results, because I can benefit directly from their experience. Likewise, if someone publishes an article on how they systematically analyzed their serials collections to make cuts, I can put their practices to use in my own library.

I may not cite those articles – in fact, most people won’t unless they do further research along that line – but they have a direct impact on the field of librarianship. Unfortunately, that impact is invisible to the authors, unless we make a point of contacting them and telling them how we were able to apply their research in our own institutions (and I don’t know about you, but I have never done that, nor had it occurred to me to do so until just this minute). So measuring ‘impact’ by citations, tweets, or downloads just doesn’t do justice to the true impact of that article. Even a philosophy of librarianship article could have serious ‘impact’ in the way it affects how someone approaches their job – but unless the reader goes on to write another article citing it, there is nothing to show the very real impact the original has made.

In fact, the research doesn’t even have to result in a scholarly article – if I read a blog post on one of these topics, I might still be able to benefit from it and use the ideas in my own practice. Of course, this depends on exactly what the content is and how much rigor you need in replicating the procedure at your own institution, but sometimes I find blog posts more useful in my day-to-day practice than the actual scholarly articles. Even the philosophical posts are more easily digested and contemplated at the length and in the tone of a more informal publication.

This is all to say that I think the way we measure and value academic research is seriously flawed – something many librarians (and other academics) would agree with, even as others in academia continue to adhere strongly to those measures. This is becoming almost a moral issue for me. Why does everything have to be measurable? Why can’t STP committees take a research project as described at face value, and accept the other types of impact it could have on readers, policy makers, or practitioners, rather than assigning a numerical value based on where it was published and how many times it was cited?

When I hear other faculty members discussing their research, even if I don’t know anything about their subject area, I can often tell whether it will have ‘real’ impact or not. The health sciences researcher whose report to the government resulted in policy change obviously had a real impact – but she won’t have a peer-reviewed article to list on her CV (unless she goes out of her way to create one to satisfy the process), nor will she likely have citations (unless that article is written). It also makes me think about my next idea for a research project, which is truly just something I’ve been curious about, but for which I can’t see many practical implications other than serving others’ curiosity. It’s a departure for me, because I am usually the most practical of people and my research usually has to serve the dual purpose of having an application in my current workplace as well as becoming fodder for another line on my CV.

As I have been thinking about the implications of impact more and more, I realize that, as publicly paid employees, perhaps we have an obligation to make our research have as wide a practical impact as possible. What do you think? Have we moved beyond the luxury of researching for research’s sake? As employees of public institutions, do we have a societal obligation to produce practical outcomes? I’m curious as to what others think and would love to continue the conversation.

For more on impact and what can count as evidence of it, please see Farah Friesen’s previous posts on this blog, What “counts” as evidence of impact? Part 1 and Part 2.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). I work specifically in the medical/health professions education context and so wanted to find alternative indicators, beyond both traditional metrics and altmetrics. Below, I will share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model advances 5 pathways of diffusion to track biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these 5 pathways has indicators (some indicators are found in more than one pathway). While the Becker Library Model includes traditional indicators, it also suggests some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
This Model has great indicators to consider for biomedical research impact, but many of them do not apply to medical/health professions education (e.g. patents, quality of life, clinical practice guidelines, medical devices, licenses, etc.).

Kuruvilla et al (2006)3 developed the Research Impact Framework (RIF) as a way to advance “coherent and comprehensive narratives of actual or potential research impacts” focusing on health services research. The RIF maps out 4 types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each impact area has specific indicators associated with it. Novel indicators include: definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, and media coverage. The RIF suggests many indicators applicable beyond clinical and biomedical disciplines.

Seeing collaborations (Becker) and email/listserv discussions (RIF) counted as research impact made me wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: grey metrics.

Grey metrics denote indicators that are stumbled upon and serendipitous, and for which there is no real systematic way of tracking. These can include personal email asks or phone conversations that nonetheless denote impact. I call them “grey metrics” because it’s kind of like grey literature searching. Grey metrics might include:
• slide sharing (not in repository, but when it’s a personal ask)
• informal consultations (e.g. through email, about a topic or your previous research. These email connections can sometimes inform other individuals’ research – sometimes even for them to develop projects that have won awards. So even if the consultations are informal via email, it shows how one’s research and guidance has an impact!)
• service as expert on panels, roundtables (shows that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. if your research informed curriculum change, or if your paper is included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses, other conference presentations)

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD Program directors heard a brief version of this talk and asked for the slides. That Program director (who also happens to be Vice-Chair of Education for Psychiatry at the University of Toronto) has shared those slides with her department, and they are now using the content from those slides to guide all their authorship conversations and ensure ethical practice. In addition, Stella and Lindsay developed a simulation game to teach about authorship ethics issues. A colleague asked for this game to be shared, and it has now been used in workshops at other institutions. These were personal asks from colleagues, but they demonstrate impact in terms of how Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case for impact beyond traditional metrics or altmetrics.
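On a practical note, if you did want to capture these serendipitous asks before they are forgotten, even a very lightweight log would do. Below is a minimal sketch in Python – with a hypothetical file name and fields of my own choosing, not a tool the CFD actually uses – showing one way such grey-metric events might be recorded as they happen, so they can be assembled later into a narrative of impact.

    import csv
    from datetime import date
    from pathlib import Path

    # Hypothetical file name and fields – illustrative only.
    LOG_FILE = Path("grey_metrics_log.csv")
    FIELDS = ["date", "indicator_type", "requested_by", "description", "outcome"]

    def record_event(indicator_type, requested_by, description, outcome=""):
        """Append one grey-metric event (e.g. a personal slide request) to the log."""
        is_new = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()  # write the header row the first time only
            writer.writerow({
                "date": date.today().isoformat(),
                "indicator_type": indicator_type,
                "requested_by": requested_by,
                "description": description,
                "outcome": outcome,
            })

    # Example: logging the slide-sharing request described above.
    record_event(
        indicator_type="slide sharing (personal ask)",
        requested_by="Program director, Psychiatry, University of Toronto",
        description="Requested authorship ethics slides after hearing a brief talk",
        outcome="Slides now guide departmental conversations about authorship",
    )

The same record could just as easily live in a notebook or a shared spreadsheet; the point is simply to note the date, the ask, and what came of it while the story is still fresh.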

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying/defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; the comment period has now closed, but check the website for the revised documents.

As we work on broadening our conceptions of what counts as research impact, we must try to resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to be prescriptive about which types of indicators to track; rather, I want to encourage researchers to think about which indicators are most appropriate and align best with their own context and work.

We must always be cognizant and vigilant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6:134. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What “counts” as evidence of impact? (Part 1 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

I work at the Centre for Faculty Development (CFD), a joint partnership between the University of Toronto and St. Michael’s Hospital, a fully affiliated teaching hospital. CFD is composed of educators and researchers in the medical education/health professions education field.

As a librarian who has been fully integrated into a research team, I have applied my training and skills to different aspects of this position. One of the areas for which I am now responsible is tracking the impact of the CFD.

While spending time tracking the work that CFD does, I have started to question what “counts” as evidence of impact and why certain types of impact are more important than others.

So what exactly is impact? This is an important question to discuss because how we define impact affects what we count, and what we choose to count changes our behaviour.

We are all familiar with the traditional metrics that count as impact in academia: 1) research, 2) teaching, and 3) service. Yet the three are not treated equally, with research often given the most weight when it comes time for annual reviews and tenure decisions.

What we select as indicators of impact actively shapes and constrains our focus and endeavours. If research is worth the most, might this not encourage faculty to put most of their efforts into research and less into teaching or service?

Hmm… does this remind us of something? Oh yes! Our allegiance to research is strong and reflected in other ways. Evidence-based practice also rests on a “three-legged stool” comprising 1) research evidence, 2) practice knowledge and expertise, and 3) client preferences and values,1 yet research is often treated as synonymous with evidence and is the most valued of the three types of evidence that should be taken into consideration in EBP.2,3 It is not accidental that what is given most weight in academia is the same as what is given most weight in EBP: research. We have established similar hierarchies of what counts, and they permeate both our scholarly work and our decision-making in practice!

Research impact is traditionally tracked through the number of grants, publications, and citations (and maybe conference presentations). Attention to altmetrics is growing, but altmetrics tends to track these very same traditional research products, merely shortening the time between production and evidence of dissemination (the actual use or impact of altmetrics is a whole other worthy discussion).

Why is it that an academic’s impact (or an academic department’s impact) is essentially dependent on research impact?

There are practical reasons for this, of course: research productivity influences an institution’s academic standing and the distribution of funding. As an example of the former, one of the performance indicators used by the University of Toronto is research excellence,4 which is based on comparing the number of publications and citations generated by UofT faculty (in the sciences) to those of faculty at other Canadian institutions. For an example of the latter, one can refer to the UK’s Research Excellence Framework (REF), which assesses “the quality of research in UK higher education institutions”5 and allocates funding based on these REF scores.

While these practical considerations cannot be ignored, might it not benefit us to broaden our definition of impact in education scholarship? (Note that the comparisons of research excellence above are based on sciences faculty only; we must think critically about the types of metrics that are appropriate for different fields and disciplines.)

This is tied to the question of the ‘value’ and purpose of education. What is it that we hope to achieve as educators and education researchers? The “rise of measurement culture”6 creates the expectation that “educational outcomes can and should be measured.”6 But those of us working in education intuit that there are potentially unquantifiable benefits in the work that we do.

  • How do we account for the broad range of educational impacts that we have?
  • How might we better capture the complex social processes/impacts in education?
  • What other types of indicators might we choose to measure, to ‘make count’ as impact, beyond traditional metrics and altmetrics?
  • How do we encourage researchers/faculty to start conceiving of impact more broadly?

While considering these questions, we must be wary of the pressure to produce and play the tracking ‘game,’ lest we fall into “focus[ing] on what is measurable at the expense of what is important.”7

Part 2 in June will examine some possible responses to the questions above regarding alternative indicators to help (re)define educational impact more broadly. A great resource for further thoughts on the topic of impact and metrics: http://blogs.lse.ac.uk/impactofsocialsciences/

I would like to thank Stella Ng and Lindsay Baker for their collaboration and guidance on this work, and Amy Dionne and Carolyn Ziegler for their support of this project.

  1. University of Saskatchewan. What is EBLIP? Centre for Evidence Based Library & Information Practice. http://library.usask.ca/ceblip/eblip/what-is-eblip.php. Accessed Feb 10, 2016.
  2. Mantzoukas S. A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008;17(2):214-23.
  3. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004;59(5):1059-69.
  4. University of Toronto. Performance Indicators 2014 Comprehensive Inventory. https://www.utoronto.ca/performance-indicators-2014-comprehensive-inventory. Accessed Feb 10, 2016.
  5. Higher Education Funding Council for England (HEFCE). REF 2014. Research Excellence Framework. http://www.ref.ac.uk/. Accessed Feb 10, 2016.
  6. Biesta G. Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educ Assess Eval Acc. 2009; 21(1): 33-46.
  7. Buttliere B. We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better. The Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2015/10/08/we-need-informative-metrics-how-to-make-metrics-better/. Published Oct 8, 2015. Accessed Feb 10, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.