Apparently Conferencing via Twitter is a Thing Now

by Christine Neilson
Information Specialist, St. Michael’s Hospital
Toronto, Ontario

Back in April, I saw a blog post about holding an entire conference – The World Seabird Twitter Conference (#WSTC2) – over Twitter. I know there’s usually tweeting happening at conferences, but holding the conference itself via Twitter? The conference was set up so that participants were given 15 minutes and a maximum of 6 tweets to present their work. Naturally, the audience could tweet their comments and questions, too. It blew my mind. And it got me thinking about the benefits and drawbacks of such a plan.

The first benefit of this form of presentation seems obvious: with no travel and a freely accessible platform, participation is free. The downside – I think – is that conference content is only one of the reasons people go to conferences, and the face-to-face payoff is missing. Yes, you can theoretically “meet” new people via Twitter, but meeting new colleagues at social functions, having coffee with old colleagues you haven’t seen in a while, visiting vendor reps, and having an excuse to travel are all important parts of conference-going.

In terms of the format, WSTC speakers were allowed 840 characters and six images to get their message across, and that isn’t much. If you have a lot to say, that could be a problem. So perhaps a Twitter presentation is more like a poster presentation of sorts. The format is restrictive, but I think that could be a good thing. Presenters are forced to be clear and concise, and to make use of meaningful graphics: all good things in my opinion. Also, not everybody is made for public speaking, so this kind of venue might appeal to people who are intimidated by speaking to a crowd, or who aren’t particularly skilled presenters. And unlike webinars, where it is hard to catch up if e-mail or another distraction draws an audience member’s attention away, with tweets it’s easy to catch up on whatever was missed. The tweets are also easy to retweet if they resonate with the audience, so we can hopefully say goodbye to ultra-vague tweets referencing conference presentations.

Would you be up for this kind of conference? I would be very disappointed if I never went to a real, in-person conference again, but I’m intrigued by the idea of having a conference via Twitter. One thing I do wonder about is what it would be like organizing such an event. You wouldn’t have to book a venue and order coffee, but you would still have to have a process in place for putting together the program and organizing the presenters. Would it be just as much work? More? Less? Perhaps this is something the EBLIP community might consider testing out for the “off” years between the international EBLIP conferences. I don’t know about you, but I’d participate.

Example of a tweet from the World Seabird Twitter Conference (#WSTC2), April 2016 https://twitter.com/Nina_OHanlon/status/720567956934148096

This article gives the views of the author and not necessarily the views of St. Michael’s Hospital, the Centre for Evidence Based Library and Information Practice, or the University Library, University of Saskatchewan.

Gathering Evidence by Asking Library Users about Memorable Experiences

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

For this week’s blog, I thought I’d share a specific question to ask library users that’s proving itself highly useful, but that I haven’t seen used much before in library assessment:

“Tell me about a memorable time in the library.”

Working with colleagues Cameron Hoffman-McGaw and Meg Ecclestone, I first used this question during the in-person interview phase of an ongoing study on information literacy (IL) practices in academic library spaces. In response, participants gave detailed accounts of studying with friends, moments that increased or decreased their stress levels, and insight into the 24/7 Learning Commons environment – a world that librarians at my place of work see very infrequently, as the library proper is closed after 10pm. The main theme of the answers was the importance of supportive social networks that form and are maintained in the library.

The question was so successful in the qualitative phase of our IL study that I was curious how it might translate to another project – an upcoming major library survey to be sent to all campus library users in March 2016. Here’s the text of the survey question that we used:

“Tell us about a memorable time in the library. It might be something that you were involved in, or that you witnessed. It might be a positive or negative experience.”

It wasn’t a required question; people were free to skip it. But 47% (404/851) of survey takers answered it, and the answers ranged in length from a sentence to several paragraphs. While analysis of the data generated from this question isn’t complete, some obvious themes jump out. Library users wrote about how library services and spaces can both ease and cause anxiety and stress, the importance of social connections and the accompanying support received in our spaces, the role of the physical environment, and the value placed on the library as a space where diverse people can be encountered, among many other topics.

To what end are we using data from this question? First, we’re doing the usual analysis – looking at the negative experiences and emotions users expressed and evaluating whether changes need to be made, policies created, etc. Second, the question helped surface some of the intangible benefits of the library, which we hadn’t spent a lot of time considering (emotional support networks, the library’s importance as a central place on campus where diverse groups interact). Now librarians are able to articulate a wider range of these benefits – backed up with evidence in the form of answers to the “memorable time” question – which helps when advocating for the library on campus, and connecting to key points in our Academic Plan document.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Fish, Meet Water: The Importance of Context in Research Design and Writing

by Frank Winter, Librarian Emeritus
University Library, University of Saskatchewan

Introduction
Issues of context are relatively under-examined in discussions of the research process, but context is, I believe, critical to understanding and evaluating the results of research. In this sense, the inattention to context seems like a case of “Fish, meet water.” There are perhaps assumptions of personal and mutual understanding on all levels that might not survive a more critical examination.

There have been, however, a few direct discussions. One that I found particularly helpful is contained in the Journal of Organizational Behavior. A 2001 editorial specifically flagged the importance of addressing issues of context that authors, in the opinion of the editors, often left unaddressed or inadequately addressed in articles. They provided this helpful overview:

The term ‘context’ comes from a Latin root meaning ‘to knit together’ or ‘to make a connection.’ Contextualizing entails linking observations to a set of relevant facts, events, or points of view that make possible research and theory that form part of a larger whole. Contextualization can occur in many stages of the research process, from question formulation, site selection, and measurement to data analysis, interpretation, and reporting.

They continue by noting that, “The need to contextualize is reinforced by the emergence of a world-wide community of organizational scholars adding ever-greater diversity in settings as well as perspectives” (Rousseau & Fried, 2001, p. 1). Unvoiced, shared understandings were no longer possible in such a widespread community. The same is true, I suggest, in librarianship.

When I read the literature of librarianship or think about my own research projects, I often have to remind myself to be explicitly mindful of context. Below, I briefly explore two aspects of context: how the researcher can think about the larger context of the design of the research question; and how the researcher can reflect issues of context in the report of the research.

Considerations in Research Design
“On or about December 1910, human character changed,” Virginia Woolf famously wrote (1924). She was writing about the emergence of the modernist movement in English literature, but her formulation of “on or about…” has spawned a host of imitators. I contend the world of research libraries changed radically on or about January 2000. That was when many librarians realized that there was now a critical mass of high-quality, full-text digital scholarly journal literature easily accessible through various databases or via the open Web. This change meant that the workflows of scholars at all levels could now bypass the library, a process explored in detail by experts such as Lorcan Dempsey. The result is what I have described elsewhere as the Gone-Away World (Winter, 2014). When reading scholarly research about university libraries and librarians, I always assess whether the research reflects this new world. And of course, when reading research conducted prior to 2000, I have to be alert to the different context in which that research was conducted.

Issues of context at this level of research design involve consideration of innumerable historical, socio-economic, technological, legal, institutional, and other environmental factors that might affect the design. Implicitly underlying all of these factors is time. Widely used texts on research design, such as Creswell’s Research design: Qualitative, quantitative, and mixed methods approaches (2014), do not address the issue of context in these terms and provide little guidance on what should be included and what should be excluded.

Perhaps Flyvbjerg’s concise advice is as much as can be said: “The drawing of boundaries for the individual unit of study defines what gets to count as case as well as what becomes context to the case” (2011, p. 301). Where the boundary is drawn is the responsibility of the researcher and should reflect expertise and familiarity with the field and the research method. At this stage, having a defined program of research will be very helpful in deciding what is relevant to context (Winter, 2015).

Considerations in Research Reporting
“Don’t try to write everything you know,” was helpful advice given to me by a colleague as I was struggling with a dissertation-length piece of writing. This advice is even more pertinent for shorter pieces of writing such as scholarly articles. Besides the word limits imposed by the journal itself, there is the common-sense need to shape a research report so that it is both coherent and interesting for its intended audience. Research reports that pack too much into their text are confused and confusing and, ultimately, irritating. How, then, can relevant issues of context be reflected in the text?

In the same issue of the Journal of Organizational Behavior cited above, one of their peer reviewers critiques one of the articles using the perspective advocated by the editors. He notes that,

“understanding both substantive and methodological context permits the reader to put the entire research report in context. Both forms of context do this when they provide information relevant to the theoretical approach being used or to the intersection between this theory and the chosen method. Context for its own sake is to be avoided as non-sequitur” (Johns, 2001, p. 32).

Reflecting a bit on this guidance, perhaps issues of context can be directly reflected in the literature review, methods, and limitations sections of an article, as well as, perhaps, indirectly in the introduction. Johns provides many different suggestions. They do not need to be elaborated here, but there should be some sense that the researcher is aware of the larger environment.

Conclusion
The reader’s understanding of the context of the research is essential to an informed reception of the author’s work. Attention paid to issues of context at the design and the reporting stages will address this need.

References
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks: SAGE Publications.

Flyvbjerg, B. (2011). Case study. In N. K. Denzin & Y. S. Lincoln (Eds.), SAGE handbook of qualitative research (4th ed., pp. 301-316). Thousand Oaks: SAGE.

Johns, G. (2001). In praise of context. Journal of Organizational Behavior, 22(1), 31-42. Retrieved from http://www.jstor.org/stable/3649605.

Rousseau, D. M., & Fried, Y. (2001). Location, location, location: Contextualizing organizational research. Journal of Organizational Behavior, 22(1), 1-13. Retrieved from http://www.jstor.org/stable/3649603.

Winter, F. (2014). Traditionalists, progressives and the Gone-Away World. Retrieved from http://words.usask.ca/ceblipblog/2014/10/14/traditionalists-progressives-and-the-gone-away-world/.

Winter, F. (2015). Forest, trees, and underbrush: Becoming the arborist of your own research. Retrieved from http://words.usask.ca/ceblipblog/2015/07/28/forest-trees-and-underbrush-becoming-the-arborist-of-your-own-research/.

Woolf, V. (1924). Mr Bennett and Mrs Brown. London: Hogarth Press.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Walking the (Research Data Management) Talk

by Marjorie Mitchell
Librarian, Learning and Research Services
UBC Okanagan Library

Librarians who help researchers create data management plans, develop usable file management systems (including file naming conventions), prepare data for submission to repositories, and work through the mysteries of subject-specific metadata schemes are at the forefront of the data sharing movement. All this work leads to research that is more reproducible and more rigorous, has fewer errors, and is more frequently cited (Wicherts et al., 2011) than research whose data isn’t shared. In addition to those benefits, shared data leads to increased opportunities for collaboration and, potentially, economic benefits (Johnson, 2016). However, are we doing what we are asking our researchers to do and ultimately making our own research data available and open for reanalysis and reuse? Are we walking the talk? Or is this a case of the carpenter’s house (unfinished) and the mechanic’s car (needing repair)?
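
To make the file-naming point concrete, here is a minimal sketch of the kind of naming convention a librarian might help a research team adopt and check automatically. The pattern, field layout, and example filenames are hypothetical illustrations, not anything prescribed in this post.

```python
import re

# Hypothetical convention: project_datatype_YYYYMMDD_v##.ext
# e.g. "libsurvey_responses_20160301_v02.csv"
PATTERN = re.compile(
    r"^[a-z0-9]+_"    # project name
    r"[a-z0-9]+_"     # data type
    r"\d{8}_"         # date as YYYYMMDD
    r"v\d{2}"         # two-digit version number
    r"\.[a-z0-9]+$"   # file extension
)

def follows_convention(filename: str) -> bool:
    """Return True if the filename matches the hypothetical convention."""
    return PATTERN.match(filename) is not None

if __name__ == "__main__":
    for name in ("libsurvey_responses_20160301_v02.csv",
                 "Final data (new) March.xlsx"):
        verdict = "ok" if follows_convention(name) else "rename me"
        print(f"{name}: {verdict}")
```

A convention like this pays off at deposit time: dates sort correctly, versions are explicit, and the files remain intelligible to someone reusing the data years later.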

When I speak of data, I use Eisner and Vasgird’s description of data as “a collection of facts, measurements or observations used to make inferences about the world we live in” (n.d.), because the research done by librarians draws on a wide variety of data: numerical, textual, photographic images, hand-drawn maps, or diagrams created by study participants. Almost all of it has the potential to be shared openly and to act as a springboard for further research, subject to appropriate ethical considerations.

I started searching to see what data I could find from Canadian librarian researchers in repositories. I have not finished my search, but my early results show some interesting things. To date, this has not been a rigorous study, but more of a curious, pre-research “let’s see what’s out there” browse, and therefore must not be misconstrued as the basis for conclusions. I briefly looked internationally for a few studies and found a wider variety of topics with available datasets than I had found in Canadian repositories, which was what I expected to find.

Two things jumped out at me right away. First, when data is available, it is either from large, national or multi-institutional studies, or it is from studies that have been repeated over time, such as LibQUAL+®. Far fewer institution-specific or single researcher/research team datasets are “available.” Some of those have “request access” restrictions, meaning it may be possible to access the data with permission from the creator, but that is not guaranteed. The second thing I noticed was how difficult it is to locate these datasets. Although there is a movement to assign unique and persistent identifiers to datasets, this has not, as yet, translated into a search engine that can comprehensively search for datasets.

I am happy to see a steady increase in the amount of librarian-generated research data being made available. Librarian-generated research is not alone in this trend; it is happening across the disciplines. While little library research is externally funded, it is worth noting that some funders now require data management plans with the goal of data sharing. Some scholarly journals, particularly in the sciences, have strong policies about data sharing. Each change, minor or major, moves us closer to data that is shared as a matter of course, rather than data shared only reluctantly.

If this all sounds like “just another thing to do” or maybe “I don’t have the skills or interest to do this,” consider research data sharing as an opportunity to partner with another librarian who has those skills but perhaps lacks the research skills you have. Research partners and teams can allow people to contribute their best skills rather than struggling to compensate for their weaknesses throughout the process.

Finally, have a look at the data that is out there just waiting to be reused. Cite it, add to it (if allowed), and share your new results. I am confident this will add greater context to your research and highlight subtleties and nuances that might have remained invisible otherwise.

References

Eisner, R., & Vasgird, D. (n.d.) Foundation Text. In RCR Data Acquisition and Management. Retrieved from http://ori.hhs.gov/education/products/columbia_wbt/rcr_data/foundation/index.html

Johnson, B. (2016). Open Data: Delivering the Benefits. Presentation, London, UK.

Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE, 6(11). doi:10.1371/journal.pone.0026828

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Lessons Learned: The Peer Review Process

by Tasha Maddison
Saskatchewan Polytechnic
and
Maha Kumaran
Leslie and Irene Dubé Health Sciences Library, University of Saskatchewan

We are currently in the final stages of editing a book on distributed learning. We initially received 27 chapter submissions on October 31, 2015, and set up a peer review process shortly thereafter. Each chapter was reviewed by two external reviewers. Our first challenge was to find enough reviewers so that each chapter could be reviewed in a timely manner. We sought reviewers from our immediate network of colleagues and later from acquaintances and individuals we had met at conferences. Finally, we had to extend our search and seek out individuals specifically for that purpose. Once the first few chapters were finalized, we asked those chapter authors to help review other chapters, and if they were not available, asked if they could recommend others from their institutions. Reviewers were invited to comment directly on the document and/or provide comments using a template that we provided. It was a learning process for everyone involved: the authors, the reviewers, and the editors. Here is what we have learned thus far in this process:

Reviewers don’t always agree. In cases like these, it is very helpful to have a third opinion, and this is where the editors play a critical role. They can ask the following questions and make a decision on the chapter: Does the review seem overly critical, or unjust? Is the reviewer actually providing suggestions that help to improve the chapter? Or are they unnecessarily picky? Should the author(s) be given a chance to revise their work significantly, or is it feasible to reject it outright?

Lesson Learned: Use your judgement in accepting or mediating the reviewers’ comments.

Reviewers are too nice. There were occasions when reviewers did not make any comments on the document and/or had only positive comments on the template, yet upon reading the same documents, the editors had questions or needed clarification.

Lesson Learned: The reviewers’ comments are not the only element to draw on when improving a chapter.

Reviewers and deadlines. Deadlines don’t mean the same thing to everyone. Some reviewers demonstrate tremendous discipline and always submit their work on time. Others use deadlines more as a guideline than a hard and fast rule. Editors should count on these potential delays and build significant contingency time into the schedule.

Lesson Learned: Be prepared to be flexible. Give reviewers three weeks to return their evaluation, but expect at least two weeks’ lag time from some. Also, build a time contingency into the entire project.

Reviewers as copyeditors: Reviewers are tempted to take on the role of copyeditor when reviewing a text, but the primary job here is to review the content and comment appropriately. The more detailed a reviewer’s suggestions are, the more helpful they are to the authors and, ultimately, the more successful the final chapter will be. General sweeping statements are not useful; specific, detailed comments are. If you are a peer reviewer, think of yourself as a most valued intermediary in the process of publishing a chapter: you take the work and help to elevate it to the next level.

Lesson Learned: Provide reviewers with a template that poses specific questions to frame their comments, along with an area for general comments to the editors that will not be shared with the authors.

Rejections after reviewing: Unfortunately, rejections are part of the peer review process. It is important that all parties are gracious and respectful if this is the outcome. The reviewers and editors should provide suggestions that strengthen the chapter and make it fit for publication upon revision. The authors should be left feeling that their submission and their participation in the process were worthwhile, and hopefully they too learned a lot.

Lesson Learned: Be prepared to listen to authors’ justifications about their chapter and then make final decisions.

The peer review process, regardless of the fate of the document, should noticeably improve the quality of the final product. Unbiased feedback from experts notes the successes or shortcomings of each chapter’s argument, the validity of its results, the flow of its discussion, and the soundness of its research foundation. Everyone involved will benefit if they come in with a positive attitude and a generous openness to criticism.

For more information on the peer review process, check out these recent Brain Work Blog posts:
http://words.usask.ca/ceblipblog/2016/01/12/peer-review/
http://words.usask.ca/ceblipblog/2015/11/17/how-to-be-an-effective-peer-reviewer/

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

The author’s side of peer review

by Kristin Hoffmann
Associate Librarian, University of Western Ontario

In the last few months, Brain-Work has featured two discussions of peer review: How to be an effective peer reviewer and Peer reviewing as a foundation of research culture, both aimed at librarians who might be serving as reviewers. In this post, I want to look at peer review from the perspective of the author who is reading and responding to peer reviewers’ feedback.

I get butterflies in my stomach every time I see the subject line in my inbox announcing an email that contains reviewer comments. Reading reviewer feedback feels like the closest I come these days to getting a grade back on a test or an essay, and I still desperately want that A. What I have increasingly come to realize is that reviewers’ feedback isn’t going to determine my final grade in the course, and that it can really be a process of giving supportive and formative feedback.

Here are some suggestions I have that will hopefully make the process of reading and responding to peer review feel less daunting and more supportive:

1. Ask someone else to read your paper before you submit it. It’s always a good idea to get a fresh perspective on your work. Also, getting feedback from someone you know will help prepare you for getting more feedback from the reviewers.

2. When you get the reviewers’ comments, particularly if they include lots of suggestions for revision, let yourself complain and vent about it – for a day. Then put the complaining behind you and move on.

3. Remember that the reviewers’ feedback is intended to improve your paper. Read it with that in mind. In my experience, reviewers have always provided at least one helpful suggestion. (Exception: a review that says simply “this was terrible and shouldn’t ever be published.” That review isn’t going to improve your paper, so go ahead and complain about that terrible review that should never have been written, and then move on.)

4. You don’t necessarily need to take all of the reviewers’ suggestions or address all their questions. The reviewers don’t know your research as well as you do, and it may be that their suggestions would change the focus of your paper beyond what you intended. It could also be that they’re asking for changes because they didn’t clearly understand your intent as you had presented it in the paper—and that should be a sign to you that you need to change something, even if the change is perhaps not exactly what the reviewers asked for.

5. Stay in contact with the editor. Let them know that you are working on changes. If the editor had sent a “revise and resubmit” decision and you’ve decided not to resubmit, let them know that too. Ask the editor for advice if the reviewers’ suggestions aren’t clear, or if the reviewers have provided conflicting suggestions.

For more advice about reading and responding to peer review, the following offer good suggestions:

Annesley, Thomas M. 2011. “Top 10 Tips for Responding to Reviewer and Editor Comments.” Clinical Chemistry 57 (4): 551–54. doi:10.1373/clinchem.2011.162388.

McKenzie, Francine. 2009. “The Art of Responding to Peer Reviews.” University Affairs. http://www.universityaffairs.ca/career-advice/career-advice-article/the-art-of-responding-to-peer-reviews/

The Open Source Paleontologist. 2009. “Responding to Peer Review.” http://openpaleo.blogspot.ca/2009/01/responding-to-peer-review.html

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What if we talked about capacity for research, not research competencies?

by Selinda Berg
Schulich School of Medicine – Windsor Program
Leddy Library, University of Windsor
University Library Researcher in Residence, University of Saskatchewan

[For the first time, I am writing a blog post that puts out there some ideas that I have been working on, which will hopefully evolve into a published paper. In the past, I have been someone who has been hesitant to blog in this way, but I am pushing myself in a new direction.]

Lately, I have been studying and contemplating multiple theoretical frameworks that span the humanities, social sciences, and health sciences. Reading these works and their framing of complex and changing environments has led me to question the current emphasis on competencies in librarianship. In this project, I am considering the ways that librarianship may benefit from shifting away from its focus on competencies and towards adopting the concept of capacity. Whereas competency focuses on the abilities, knowledge, and skills to successfully complete a task, capacity is the faculty or potential for experiencing, appreciating, and adapting. Capacity is about growth: growth of the individual in knowledge and experience.

We have recently seen an influx of documents addressing the competencies of librarians, including the research competencies of librarians (for example: see CARL, 2007). While these documents have value, they are one piece of a much larger puzzle. In this post, I want to consider the ways in which a shift towards a focus on capacity for research may initiate positive changes in our understanding and approach to research:

Embrace research as a learning process: There is no one static set of skills or abilities that will prepare someone to “do” research. The abilities, skills, and knowledge that I have gained by completing my PhD will not “set me up” for my next research study or for the research project that I undertake after that. I will have to learn new methods, try out new technologies, consider new theoretical frameworks, and certainly evolve my ideas. Research does not require a static set of skills and abilities (competencies), but rather the ability to continually evolve in our knowledge and abilities (capacity). As librarians, most of us have taken one, two, or three research methods courses. However, published research, formal and informal conversations, and personal experience all suggest that this framework of skills has not fully prepared us to successfully undertake research. We need to reframe our thinking and acknowledge that our greatest strength is our curiosity and our ability to evolve.

Encourage a research program: As noted above, research is not Rinse and Repeat. Our goal should not be to repeat the research that we have done before, but rather to develop a research program that evolves our ideas, builds on our results, delves deeper into issues, and looks at questions from different angles and through different lenses. The realization that our success relies not on our current set of skills but on our ability to evolve and grow our understandings will encourage us to push further and delve deeper into a topic and, in turn, develop a strong program of research.

Empower librarians to know that we can: Embracing the idea that we have the ability to learn, to grow, and to adapt will move us away from conversations (within and outside of the profession) that focus on “Librarians were not trained to be researchers,” “Librarians do not have PhDs,” and “Librarians don’t have the skills to do research.” We need to embrace the notion that we can evolve and transform to meet the challenges presented by new research opportunities, and we must take the time for these processes to take place. All researchers have to dedicate significant time to exploring and learning the context of a topic, to exploring the wide array of possible techniques for study, and to considering the way in which they can contribute a new understanding of a topic. It is quite possible that our first projects will not have the perfect research question, method, instrument, or theoretical framework, but from that we should be motivated and inspired to learn and grow—to tap into our capacity.

Research success relies on more than a set of skills: While the skills and abilities to do research are important, capacity recognizes that there are factors at play beyond skills. Personal commitment, institutional commitment, resources to support research, and the allocation and dedication of time to transform and evolve are potentially as important or more important in fulfilling both personal and institutional capacity for research. Capacity is the potential to grow and experience, but it is critical to realize that this potential requires more than a set of skills to complete a task.

Competencies are the skills we need to complete a task. But research is not a task; it is a process. Librarians, in all areas of their professional responsibilities, transform and evolve to meet new challenges and opportunities. As librarians, we need to recognize that our biggest asset is our ability to learn and to grow–our capacity for research.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Futures Studies: What is it, and how can it be ‘evidence-based’ research?

by Tegan Darnell
Research Librarian
University of Southern Queensland, Australia

In March 2015, I started as a student in the Doctorate of Professional Studies (DPST) program. I wanted to find out why librarians’ information practice is so far behind what is relevant in the current information environment. Obviously, we are all at different places and have different strengths with regard to our professional practice, but generally, as a group, librarians are, well, behind the information use of our clientele. Just admit it.

Scholarly communication has been transformed. The world in which information professionals operate has been disrupted, and embracing these changes allows for a much broader scope for the roles we play. I wanted, really, to shake things up. After reading tonnes of the literature, debating with myself, and arguing with the DPST Program Director about how I was going to address the problem, I was introduced to causal layered analysis (CLA).

CLA is a ‘futures studies’ methodology introduced by Sohail Inayatullah in 1998; the original paper is cited in the references below. Professor Inayatullah is a practitioner of futures studies, the interdisciplinary study of postulating possible, probable, and preferable futures. But how can this possibly be scientific? I mean, how can it be possible to collect evidence from a future that hasn’t happened yet? It is a paradox that has not been ignored by practitioners.

Futures studies is a growing transdisciplinary field which has embraced such fields as systems thinking, education, hermeneutics, macrohistory, sociology, management, ecology, literature, ethics, philosophy, planning, and others. It is an integrated field ‘with many lines of inquiry weaved together’ to create a complex whole (Ramos, 2002).

The discipline uses a systematic and pattern-based approach to analysing the sources, patterns, and causes of change and stability in the past (history, economics, political science) and present (sociology, economics, political science, critical theory) in an attempt to develop foresight and determine the likelihood of future events and trends.

De Jouvenel (1965), an early futures theorist, likened forecasting, or ‘the art of conjecture’, to the science of the meteorologist. Weather forecasts can be prepared reasonably accurately for each of the next few days. A forecast for more than a month in advance can be based on patterns, such as normal temperatures and precipitation, and other factors which may affect these in relation to the average. There is no way for a meteorologist to say with any certainty what the minimum and maximum temperatures and precipitation levels will be on a particular day one month in the future. The meteorologist may, however, be able to say that it is likely that we will have above-average rainfall, or that temperatures will be below average. A futures study similarly considers patterns of power and privilege, social institutions, religion, and history to postulate possible future states that may recur.

The causal layered analysis method, specifically, is not used to predict the future, but rather to create ‘transformative spaces for the creation of alternative futures’ (Inayatullah 1998). It is an action research method for increasing the probability of a preferred future by examining the problems, systems, worldviews and myths of the present. It is about human agency – using what we know about the past, to act in the present, in order to create/shape the future we would like to see.

Just imagine librarians in your own workplace, critically examining their own current problems, existing systems, worldviews, and subconscious myths and mythologies, to transform their practice. Perhaps you are starting to see why I decided to use the causal layered analysis method in my research.

I’m currently preparing for Confirmation of Candidature. Professor Inayatullah has agreed to be one of my supervisors. I think that makes me a *ahem* futures theorist.

If you are interested in finding out more, I recommend this article by Professor Inayatullah on Library Futures, published in The Futurist magazine.

References:

Inayatullah, S 1998, ‘Causal layered analysis: Poststructuralism as method’, Futures, vol. 30, no. 8, pp. 815–829.

De Jouvenel, B 1965, The Art of Conjecture, Trans. by Nikita Lary. Weidenfeld and Nicholson, London.

Ramos, JM 2002, ‘Action Research as Foresight Methodology’, Journal of Futures Studies, vol. 7, no.1, pp. 1-24.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

What “counts” as evidence of impact? (Part 1 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

I work at the Centre for Faculty Development (CFD), a joint partnership between the University of Toronto and St. Michael’s Hospital, a fully affiliated teaching hospital. CFD is composed of educators and researchers in the medical education/health professions education field.

As a librarian who has been fully integrated into a research team, I have applied my training and skills to different aspects of this position. One of the areas for which I am now responsible is tracking the impact of the CFD.

While spending time tracking the work that CFD does, I have started to question what “counts” as evidence of impact and why certain types of impact are considered more important than others.

So what exactly is impact? This is an important question to discuss because how we define impact affects what we count, and what we choose to count changes our behaviour.

We are all familiar with the traditional metrics that count as impact in academia: 1) research, 2) teaching, and 3) service. Yet the three are not treated equally, with research often given the most weight when it comes time for annual reviews and tenure decisions.

What we select as indicators of impact actively shapes and constrains our focus and endeavours. If research is worth the most, might this not encourage faculty to put most of their efforts into research and less into teaching or service?

Hmm… does this remind us of something? Oh yes! Our allegiance to research is strong and reflected in other ways. Evidence-based practice also rests on a “three-legged stool” comprising 1) research evidence, 2) practice knowledge and expertise, and 3) client preferences and values,1 but research is often treated as synonymous with evidence and is the most valued of the three types of evidence that should be taken into consideration in EBP.2,3 It is not accidental that what is given most weight in academia is the same as what is given most weight in EBP: research. We have established similar hierarchies of what counts that permeate our scholarly work and our decision-making in practice!

Research impact is traditionally tracked through the number of grants, publications, and citations (and maybe conference presentations). Attention to altmetrics is growing, but altmetrics tends to track these very same traditional research products, merely speeding up the time between production and dissemination (the actual use or impact of altmetrics is a whole other worthy discussion).

Why is it that an academic’s impact (or an academic department’s impact) is essentially dependent on research impact?

There are practical reasons for this, of course: research productivity influences an institution’s academic standing and the distribution of funding. As an example of the former, one of the performance indicators used by the University of Toronto is research excellence,4 which is based on comparing the number of publications and citations generated by UofT faculty (in the sciences) to those of faculty at other Canadian institutions. For an example of the latter, one can refer to the UK’s Research Excellence Framework (REF), which assesses “the quality of research in UK higher education institutions”5 and allocates funding based on these REF scores.

While these practical considerations cannot be ignored, might it not benefit us to broaden our definition of impact in education scholarship? (Note that the comparisons of research excellence above are based on sciences faculty only. We must think critically about the type of metrics that are appropriate for different fields/disciplines).

This is tied to the question of the ‘value’ and purpose of education. What is it that we hope to achieve as educators and education researchers? The “rise of measurement culture”6 creates the expectation that “educational outcomes can and should be measured.”6 But those of us working in education intuit that there are potentially unquantifiable benefits in the work that we do.

  • How do we account for the broad range of educational impacts that we have?
  • How might we better capture the complex social processes/impacts in education?
  • What other types of indicators might we choose to measure, to ‘make count’ as impact, beyond traditional metrics and altmetrics?
  • How do we encourage researchers/faculty to start conceiving of impact more broadly?

While considering these questions, we must be wary of the pressure to produce and play the tracking ‘game,’ lest we fall into “focus[ing] on what is measurable at the expense of what is important.”7

Part 2 in June will examine some possible responses to the questions above regarding alternative indicators to help (re)define educational impact more broadly. A great resource for further thoughts on the topic of impact and metrics: http://blogs.lse.ac.uk/impactofsocialsciences/

I would like to thank Stella Ng and Lindsay Baker for their collaboration and guidance on this work, and Amy Dionne and Carolyn Ziegler for their support of this project.

  1. University of Saskatchewan. What is EBLIP? Centre for Evidence Based Library & Information Practice. http://library.usask.ca/ceblip/eblip/what-is-eblip.php. Accessed Feb 10, 2016.
  2. Mantzoukas S. A review of evidence-based practice, nursing research and reflection: levelling the hierarchy. J Clin Nurs. 2008;17(2):214-23.
  3. Mykhalovskiy E, Weir L. The problem of evidence-based medicine: directions for social science. Soc Sci Med. 2004;59(5):1059-69.
  4. University of Toronto. Performance Indicators 2014 Comprehensive Inventory. https://www.utoronto.ca/performance-indicators-2014-comprehensive-inventory. Accessed Feb 10, 2016.
  5. Higher Education Funding Council for England (HEFCE). REF 2014. Research Excellence Framework. http://www.ref.ac.uk/. Accessed Feb 10, 2016.
  6. Biesta G. Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educ Assess Eval Acc. 2009; 21(1): 33-46.
  7. Buttliere B. We need informative metrics that will help, not hurt, the scientific endeavor – let’s work to make metrics better. The Impact Blog. http://blogs.lse.ac.uk/impactofsocialsciences/2015/10/08/we-need-informative-metrics-how-to-make-metrics-better/. Published Oct 8, 2015. Accessed Feb 10, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Evidence for Big Deal Decisions: The Importance of Consultation

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

As the loonie tanks against the USD, my place of work finds itself in the same situation as many libraries – needing to make cuts to cover the shortfall and/or beg admin for more money. Inevitably, this means talking about Big Deals, “an online aggregation of journals that publishers offer as a one-price, one size fits all package” (Frazier, 2001). Are they worth the cost? And if they’re worth it on a cost-per-title basis, are they still worth it when you factor in how much of our budget gets eaten up by them? Are they worth it when this is an unsustainable business model controlled by a handful of major publishers? These questions have been at the forefront of my mind as I run the numbers on Big Deal packages.

If you’re looking for a good introductory article on assessing Big Deals, I recommend “Deal or No Deal? Evaluating Big Deals and Their Journals” by Blecic et al. (or you can read the EBLIP Evidence Summary of the article). However, like much of the literature on evaluating Big Deals, it’s written from a quantitative perspective and places great emphasis on cost-per-use data. Relying so heavily on one metric has always made me uncomfortable. How fortuitous, then, that a recent trip to the 2015 Canadian Library Assessment Workshop (CLAW) included a very interesting presentation on Big Deals – “Unbundling the Big Deal” by Dr. Vincent Larivière and Stéphanie Gagnon of the Université de Montréal, and Arnald Desrochers of the Université du Québec à Montréal. Both institutions had recently undertaken large-scale analyses of their periodicals collections, led by Dr. Larivière.

In addition to quantitative analysis of COUNTER JR1 (Number of Successful Full-Text Article Requests) data and citations, a survey was sent to faculty, post-docs, and grad students. This survey asked for the 10 journal titles most important to the respondent’s research and teaching, and the 5 most important to their field of study. At U. de M., 2,213 people responded to the survey, and what they said was the stunning part of this presentation: 50% of the journal titles listed by respondents as critical to their research and teaching, and to their disciplines, didn’t show up as essential titles in the COUNTER reports and citation analyses. If librarians had simply relied on quantitative data to break up a Big Deal, they would have missed a significant number of titles that faculty, post-docs, and grad students deemed essential!
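
As a concrete, entirely hypothetical sketch of the gap described above, the snippet below ranks invented titles by cost-per-use from JR1-style counts, applies an arbitrary cut threshold, and flags titles that would be cut on usage alone even though survey respondents named them essential. None of the titles, costs, or thresholds come from the U. de M. study; they are made up to illustrate the mismatch.

```python
# All figures below are invented for illustration only.
annual_cost = {"Journal A": 1200.0, "Journal B": 950.0, "Journal C": 400.0}
jr1_uses = {"Journal A": 3000, "Journal B": 40, "Journal C": 800}  # full-text requests
survey_essential = {"Journal B", "Journal C"}  # titles respondents called critical

CUT_THRESHOLD = 2.00  # keep titles costing at most $2 per use (arbitrary)

for title in sorted(annual_cost):
    cost_per_use = annual_cost[title] / jr1_uses[title]
    keep = cost_per_use <= CUT_THRESHOLD
    missed = (not keep) and (title in survey_essential)
    note = "  <- essential per survey, but cut on usage alone" if missed else ""
    print(f"{title}: ${cost_per_use:.2f}/use, keep_by_usage={keep}{note}")
```

In this toy example, Journal B would be dropped by a usage-only analysis even though respondents consider it essential: exactly the kind of title the survey surfaced.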

While there’s lots to unpack about why so many journals are deemed essential but don’t show up above the “cut” threshold line in JR1 (i.e., they’re not being heavily used), this one finding should give librarians pause. Much of the research describing how best to make evidence-based choices about Big Deals mentions off-handedly that faculty should be consulted, but Dr. Larivière’s research has me convinced that this consultation needs to be rigorous and not an afterthought.

The presentation also had me once again appreciating the value qualitative research brings to library assessment. The literature on Big Deals is mainly based on quantitative analysis of usage reports, and Dr. Larivière’s research makes it clear that librarians cannot rely solely on this type of data (especially simplistic cost-per-article data) for a thorough analysis of Big Deals. If we do, we risk misunderstanding the needs of faculty, post-docs, and grad students.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.