What “counts” as evidence of impact? (Part 1 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

I work at the Centre for Faculty Development (CFD), a joint partnership between the University of Toronto and St. Michael’s Hospital, a fully affiliated teaching hospital. CFD is composed of educators and researchers in the medical education/health professions education field.

As a librarian who has been fully integrated into a research team, I have applied my training and skills to different aspects of this position. One of the areas for which I am now responsible is tracking the impact of the CFD.

While spending time tracking the work that CFD does, I have started to question what “counts” as evidence of impact and why certain types of impact are valued more highly than others.

So what exactly is impact? This is an important question to discuss because how we define impact affects what we count, and what we choose to count changes our behaviour.

We are all familiar with the traditional categories that count as impact in academia: 1) research, 2) teaching, and 3) service. Yet the three are not treated equally, with research often given the most weight when it comes time for annual reviews and tenure decisions.

What we select as indicators of impact actively shapes and constrains our focus and endeavours. If research is worth the most, might this not encourage faculty to put most of their efforts into research and less into teaching or service?

Hmm… does this remind us of something? Oh yes! Our allegiance to research is strong and reflected in other ways. Evidence-based practice (EBP) is also described as resting on a “three-legged stool” comprising 1) research evidence, 2) practice knowledge and expertise, and 3) client preferences and values,1 yet research is often treated as synonymous with evidence and is the most valued of the three types of evidence that should inform EBP.2,3 It is not accidental that what is given the most weight in academia is also what is given the most weight in EBP: research. We have established similar hierarchies of what counts, and they permeate both our scholarly work and our decision-making in practice!

Research impact is traditionally tracked through numbers of grants, publications, and citations (and perhaps conference presentations). Attention to altmetrics is growing, but altmetrics tends to track these same traditional research products, merely shortening the time between production and evidence of dissemination (the actual use or impact of altmetrics is a worthy discussion in its own right).

Why is it that an academic’s impact (or an academic department’s impact) is essentially dependent on research impact?

There are practical reasons for this, of course: research productivity influences an institution’s academic standing and the distribution of funding. As an example of the former, one of the performance indicators used by the University of Toronto is research excellence,4 which is based on comparing the number of publications and citations generated by U of T faculty (in the sciences) with those generated by faculty at other Canadian institutions. For an example of the latter, one can look to the UK’s Research Excellence Framework (REF), which assesses “the quality of research in UK higher education institutions”5 and allocates funding on the basis of REF scores.

While these practical considerations cannot be ignored, might it not benefit us to broaden our definition of impact in education scholarship? (Note that the comparisons of research excellence above are based on sciences faculty only; we must think critically about the types of metrics that are appropriate for different fields and disciplines.)

This is tied to the question of the ‘value’ and purpose of education. What is it that we hope to achieve as educators and education researchers? The “rise of measurement culture”6 creates the expectation that “educational outcomes can and should be measured.”6 But those of us working in education intuit that there are potentially unquantifiable benefits in the work that we do.

  • How do we account for the broad range of educational impacts that we have?
  • How might we better capture the complex social processes/impacts in education?
  • What other types of indicators might we choose to measure, to ‘make count’ as impact, beyond traditional metrics and altmetrics?
  • How do we encourage researchers/faculty to start conceiving of impact more broadly?

While considering these questions, we must be wary of the pressure to produce and play the tracking ‘game,’ lest we fall into “focus[ing] on what is measurable at the expense of what is important.”7

Part 2 in June will examine some possible responses to the questions above regarding alternative indicators to help (re)define educational impact more broadly. A great resource for further thoughts on the topic of impact and metrics: http://blogs.lse.ac.uk/impactofsocialsciences/

I would like to thank Stella Ng and Lindsay Baker for their collaboration and guidance on this work, and Amy Dionne and Carolyn Ziegler for their support of this project.

  1. University of Saskatchewan. What is EBLIP? Centre for Evidence Based Library & Information Practice. http://library.usask.ca/ceblip/eblip/what-is-eblip.php. Accessed Feb 10, 2016.
  2. Mantzoukas S. A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008;17(2):214-23.
  3. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004;59(5):1059-69.
  4. University of Toronto. Performance Indicators 2014 Comprehensive Inventory. https://www.utoronto.ca/performance-indicators-2014-comprehensive-inventory. Accessed Feb 10, 2016.
  5. Higher Education Funding Council for England (HEFCE). REF 2014. Research Excellence Framework. http://www.ref.ac.uk/. Accessed Feb 10, 2016.
  6. Biesta G. Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educ Assess Eval Acc. 2009;21(1):33-46.
  7. Buttliere B. We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better. The Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2015/10/08/we-need-informative-metrics-how-to-make-metrics-better/. Published Oct 8, 2015. Accessed Feb 10, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Research and navigating the changing cataloging environment

by Donna Frederick
Services to Libraries, University of Saskatchewan

The nature of a technological disruption is that it interrupts the continuity between past and present. Traditions and tried-and-true methods may lose their effectiveness or even begin to fail outright. In disrupted environments, practitioners may find themselves lacking both the theory and the experience to feel confident in making decisions and taking action. As I meet with the Copy Cataloging Group at the University Library each week, I am reminded of this reality as cataloguers bring the cataloguing conundrums they encounter to the meeting.

In the environment of traditional cataloguing, the mental model of a catalogue record is that of a flat, linear container for descriptive information, and the cataloguing process was well supported by a set of relatively concrete cataloguing rules. In today’s environment, however, where the mental model is multidimensional and characterized by the expression of various relationships among resources and resource characteristics, the old “rules” simply aren’t relevant anymore. Conundrums soon arise as it becomes apparent that we are attempting to create complex, multidimensional metadata in the MARC container, which accommodates only flat, linear records. The difficulty is compounded when it becomes clear that the new “guidelines” for creating metadata fail to address many day-to-day challenges, and that searches of listserv archives and questions posted to those lists reveal that neither the “experts” nor librarianship in general has viable solutions for many of these problems either. How, then, does the practice of metadata creation avoid being mired in unanswered questions and seemingly unresolvable challenges?
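As a rough illustration of what “flat and linear” means here, the sketch below (in Python, with invented field values; the tags 100, 245, 264, and 500 are real MARC tags, everything else is hypothetical) shows a MARC-style record as an ordered list of tagged fields.

```python
# A MARC-style record is essentially an ordered, flat list of tagged fields.
# Tags: 100 = main entry (personal name), 245 = title statement,
# 264 = publication statement, 500 = general note.
# The field values below are invented purely for illustration.
flat_record = [
    ("100", "1 ", "$a Tolstoy, Leo, $d 1828-1910."),
    ("245", "10", "$a War and peace / $c Leo Tolstoy."),
    ("264", " 1", "$a Toronto : $b Example Press, $c 2015."),
    ("500", "  ", "$a Translation of the Russian original."),
]

# The record can only be read top to bottom. A relationship such as
# "this is a translation of another work" is carried here as free text in
# the note field; expressing it as an explicit, navigable link between
# separate entities (work, expression, agent) is the kind of
# multidimensional relationship the newer models aim to capture.
for tag, indicators, subfields in flat_record:
    print(tag, indicators, subfields)
```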

Ultimately, those involved with what is sometimes called the “reinvention of cataloguing” need a solid base of theory upon which to make decisions. Cataloguers have been struggling with the FRBR model (Functional Requirements for Bibliographic Records), the new RDA descriptive standard built upon it, and the concept of linked data. The gap between these conceptual models and the day-to-day practice of cataloguing often feels impossibly wide to those who were trained in traditional cataloguing. Fortunately, IFLA (2015) has recently released the latest draft of the “Statement of International Cataloguing Principles,” which will help bridge this gap by providing specific principles upon which decisions can be made. While the principles alleviate some of the abstraction created by the theoretical models, cataloguers still face the day-to-day challenge of working in an “in-between land” where the new theory and practice have begun to take root but the actual systems in which we create and use metadata are still largely based on concepts from the late 1960s. In addition to shifting our own mental models, cataloguers are also charged with informing, and sometimes re-educating, other library workers about the changing nature of metadata. That metadata is central to many library processes, ranging from discovery of and access to resources and information to functions such as acquisitions and interlibrary loan. Finding a way to inform non-cataloguers about the new reality in a relevant and meaningful way remains one more challenge that has yet to be effectively addressed.
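For readers less familiar with the FRBR model mentioned above, here is a minimal sketch of its Group 1 entities (Work, Expression, Manifestation, Item), the chain of linked entities on which RDA and the newer principles build. The entity names come from FRBR; the sample data is invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Work:                      # the abstract intellectual creation
    title: str

@dataclass
class Expression:                # a realization of the work (e.g. a translation)
    work: Work
    language: str

@dataclass
class Manifestation:             # a published embodiment (e.g. a 2015 paperback)
    expression: Expression
    publisher: str
    year: int

@dataclass
class Item:                      # a single physical or digital copy
    manifestation: Manifestation
    barcode: str

# Invented example data, not drawn from the article.
work = Work(title="War and peace")
expression = Expression(work=work, language="eng")
manifestation = Manifestation(expression=expression, publisher="Example Press", year=2015)
item = Item(manifestation=manifestation, barcode="31234000123456")

# Relationships are explicit and navigable, rather than implied by
# punctuation and field order inside a single flat record.
print(item.manifestation.expression.work.title)
```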

As I concluded a recent research project on the very topic of how to effectively communicate information about the new cataloguing models and standards, I was reminded of the importance of research and evidence in professional practice. One measure of training effectiveness I used in my study was to track changes in the volume and frequency of cataloguing questions asked over time. My hypothesis was that the introduction of training would lead to a reduction in questions; in reality, a steady increase was observed. Puzzled by the results, I examined the actual content of the questions, which revealed increasing complexity and thoughtfulness over time. While in the past there were “cataloguing rules” that could be learned and mastered, in this new environment training didn’t actually lead to mastery. Instead, training led to a new and deeper level of understanding. The evidence suggests an ongoing learning process in which the issue of mastery may not be relevant. Without purposely undertaking research and learning from the evidence, the nature of the disruption’s impact on the process of learning the new cataloguing models would likely not have been discovered; in fact, the lack of mastery and the ever-increasing number of questions would likely have been a source of frustration. This is a highly valuable finding, both for the training of cataloguers and library staff in general, and it will inform the creation of positive and effective future learning experiences.
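To give a concrete sense of what tracking the “volume and frequency of questions over time” might look like, here is a minimal sketch in Python. The dates and counts below are invented for illustration and are not drawn from the study.

```python
from collections import Counter
from datetime import date

# Hypothetical log of questions brought to a weekly copy-cataloguing meeting;
# in a real study the dates would come from meeting notes or a question log.
question_log = [
    date(2015, 9, 14), date(2015, 9, 21), date(2015, 10, 5),
    date(2015, 10, 5), date(2015, 10, 19), date(2015, 11, 2),
    date(2015, 11, 9), date(2015, 11, 9), date(2015, 11, 23),
]

# Tally questions per month to see whether volume falls after training begins
# (the original hypothesis) or rises (what was actually observed).
per_month = Counter(d.strftime("%Y-%m") for d in question_log)
for month in sorted(per_month):
    print(month, per_month[month])
```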

References
IFLA (2015). Statement of International Cataloguing Principles (ICP). The Hague: International Federation of Library Associations and Institutions. Retrieved from: http://www.ifla.org/files/assets/cataloguing/icp/icp_2015_worldwide_review.pdf

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.