Impactful research

by Nicole Eva-Rice, Liaison Librarian for Management, Economics, Political Science, and Agriculture Studies, University of Lethbridge Library

Why do we do research? Is it simply to fulfill our obligations for tenure and promotion? Is it to satisfy our curiosity about some phenomenon? Or is it to help our fellow librarians (or researchers in another discipline) do their jobs, or to further the knowledge in our field?

I find myself grappling with these thoughts when embarking on a new research project. Sometimes it’s difficult to see the point of our research when we are stuck on the ‘publish or perish’ hamster wheel, and I suspect it’s all the more so for faculty outside of librarianship. It’s wonderful when we have an obvious course set out for us and can see the practical applications of our research – finding a cure for a disease, for example, or a way to improve school curricula – but what if the nature of our research is more esoteric? Does the world need another article on the philosophy of librarianship, or on the creative process in research methods? Or are these ‘make-work’ projects for scholars who must research in order to survive in academe?

My most satisfying research experiences, and the ones I most appreciate from others, have to do with the practical aspects of my job. I love research that can directly inform my day-to-day work, and I like knowing that any decisions I make based on that research are grounded in evidence. If someone has researched the effectiveness of flipping a one-shot and can show me whether it’s better or worse than the alternative, I am very appreciative of their efforts, both in performing the study and in publishing the results, as I can benefit directly from their experience. Likewise, if someone publishes an article on how they systematically analyzed their serials collections to make cuts, I can put their practices to use in my own library. I may not cite those articles – in fact, most people won’t unless they do further research along that line – but they have a direct impact on the field of librarianship.

Unfortunately, that impact is invisible to the author/researchers unless we make a point of contacting them and telling them how we were able to apply their research in our own institutions (and I don’t know about you, but I have never done that, nor had it occurred to me to do so until just this minute). So measuring ‘impact’ by citations, tweets, or downloads just doesn’t do justice to the true impact of an article. Even a philosophy of librarianship article could have serious ‘impact’ in the way it shapes how someone approaches their job – but unless the reader goes on to write another article citing it, there is nothing to show the very real impact that original article has made.

In fact, the research doesn’t even have to result in a scholarly article – if I read a blog post on one of these topics, I might still be able to benefit from it and use the ideas in my own practice. Of course, this depends on exactly what the content is and how much rigor you need in replicating the procedure at your own institution, but sometimes I find blog posts more useful in my day-to-day practice than the scholarly articles themselves. Even the philosophical posts are more easily digested and contemplated at the length and in the tone of a more informal publication.

This is all to say that I think the way we measure and value academic research is seriously flawed – something many librarians (and other academics) would agree with, but a system that others in academia still strongly adhere to. This is becoming almost a moral issue for me. Why does everything have to be measurable? Why can’t STP committees take a research project as described at face value, and accept the other types of impact it could have on readers, policy makers, and practitioners, rather than assigning it a numerical value based on where it was published and how many times it was cited?

When I hear other faculty members discussing their research, even if I know nothing about their subject area, I can often tell whether it will have ‘real’ impact. The health sciences researcher whose report to the government resulted in policy change obviously had a real impact – but she won’t have a peer-reviewed article to list on her CV (unless she goes out of her way to create one to satisfy the process), nor will she likely have citations (unless that article is written).

It also makes me think about my next idea for a research project, which is truly just something I’ve been curious about, but for which I can’t see many practical implications beyond serving others’ curiosity. It’s a departure for me, because I am usually the most practical of people, and my research usually has to serve the dual purpose of having application in my current workplace and providing fodder for another line on my CV. As I think about the implications of impact more and more, I realize that, as publicly paid employees, perhaps we have an obligation to give our research as wide a practical impact as possible. What do you think? Have we moved beyond the luxury of researching for research’s sake? As employees of public institutions, do we have a societal obligation to produce practical outcomes? I’m curious as to what others think and would love to continue the conversation.

For more on impact and what can count as evidence of it, please see Farah Friesen’s previous posts on this blog, What “counts” as evidence of impact? Part 1 and Part 2.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.