Late Summer Break

by Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice

A week of August is gone already and soon we’re all going to lament, “Where did the summer go??” Or, if you’re in the land down under, you might cheer, “Yay, winter is over!” Regardless, Brain-Work is taking a bit of a break for the remainder of August and will return in September with our current roster of blog writers eager and ready to go. We’ll have some familiar faces as well as some new additions to the lineup. In 2016/17, we’re going to be blogging like never before! If you’ve always wanted to try your hand at writing a blog post but don’t want the commitment of your own blog, consider writing for Brain-Work. You can contact me to sign up or if you have any questions. Info for contributors is here: Brain-Work info.

A reminder that the C-EBLIP Research Network is in recruitment mode. The Research Network is an international affiliation of institutions that support librarians as researchers and/or are interested in evidence based library and information practice. You can find out more information or fill out a form to join here: C-EBLIP Research Network.

And the C-EBLIP Fall Symposium: Librarians as Researchers is set to go on Wednesday, October 12, 2016. Set in the lovely Paris of the Prairies, Saskatoon, Saskatchewan, the third annual symposium features a networking breakfast, an international keynote speaker, a single-track session, and lots of time for meeting, greeting, talking, and sharing. Registration will open closer to the end of August, so plan now to attend. You can find out all kinds of Symposium stuff here: C-EBLIP Fall Symposium.

Teaser: Selinda Berg and I are cooking up an exciting new C-EBLIP Research Network initiative. I can’t say much more at this point, but be on the lookout for something new coming this fall.

What “counts” as evidence of impact? (Part 2 of 2)

by Farah Friesen
Centre for Faculty Development (CFD)
University of Toronto and St. Michael’s Hospital

In February’s post, I proposed a critical look at what counts as evidence of research impact, beyond traditional metrics (grants, publications, and peer-reviewed conference presentations). I work specifically in the medical/health professions education context and so wanted to find alternative indicators, beyond traditional and altmetrics. Below, I will share some of these resources with you.

Bernard Becker Medical Library Model for Assessment of Research Impact.1 The Becker Library Model advances 5 pathways of diffusion to track biomedical research impact:
1. Advancement of knowledge
2. Clinical Implementation
3. Community Benefit
4. Legislation and Policy
5. Economic Benefit
Each of these 5 pathways has indicators (some indicators are found in more than one pathway). While the Becker Library Model includes traditional indicators, they suggest some novel impact indicators:2
• valuing collaborations as an indicator of research output/impact
• tracking data sharing, media releases, appearances, or interviews, mobile applications/websites, and research methodologies as evidence of impact
This model offers strong indicators to consider for biomedical research impact, but many of them do not apply to medical/health professions education (e.g. patents, quality of life, clinical practice guidelines, medical devices, licenses, etc.).

Kuruvilla et al (2006)3 developed the Research Impact Framework (RIF) as a way to advance “coherent and comprehensive narratives of actual or potential research impacts” focusing on health services research. The RIF maps out 4 types of impact:
1. Research-related impacts
2. Policy impacts
3. Service impacts
4. Societal impacts
Each type of impact area has specific indicators associated with it. Novel indicators include: Definitions and concepts (e.g. the concept of equity in health care financing), ethical debates and guidelines, email/listserv discussions, media coverage. RIF suggests many indicators applicable to non-clinical/biomedicine disciplines.

Tracking collaborations (Becker) and email/listserv discussions (RIF) as research impact, I started to wonder what other types of research dissemination activities we might not have traditionally counted, but which are, in fact, demonstrative of impact. I have coined a term for this type of indicator: Grey Metrics.

Grey metrics are stumbled-upon, serendipitous indicators for which there is no systematic way of tracking. They can include personal email asks or phone conversations that genuinely denote impact. I call them “grey metrics” because collecting them is a bit like grey literature searching. Grey metrics might include:
• slide sharing (not in repository, but when it’s a personal ask)
• informal consultations (e.g. through email, about a topic or your previous research; these exchanges can sometimes inform other individuals’ research – even projects that go on to win awards – so even informal email consultations show how one’s research and guidance have impact)
• service as expert on panels, roundtables (shows that your research/practice expertise and knowledge are valued)
• curriculum changes based on your research (e.g. if your research informed curriculum change, or if your paper is included in a curriculum, which might lead to transformative education)
• citation in grey literature (e.g. mentions in keynote addresses, other conference presentations)
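Because grey metrics surface serendipitously, even a minimal structured log can make them trackable over time. Here is a hypothetical sketch of such a log; the field names and example events are my own illustration, not a system the authors describe:

```python
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class GreyMetricEvent:
    when: date
    kind: str    # e.g. "slide request", "informal consultation"
    source: str  # who asked, or where the mention appeared
    note: str    # what the request suggests about impact

log = [
    GreyMetricEvent(date(2016, 3, 2), "slide request",
                    "program director", "slides shared with a department"),
    GreyMetricEvent(date(2016, 5, 9), "informal consultation",
                    "colleague email", "advice informed another researcher's project"),
]

# Tally events by kind -- a simple summary to cite in an impact narrative.
counts = {}
for event in log:
    counts[event.kind] = counts.get(event.kind, 0) + 1

# Persist the log as CSV so it can live alongside a CV or annual report.
with open("grey_metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["when", "kind", "source", "note"])
    writer.writeheader()
    for event in log:
        writer.writerow(asdict(event))
```

Even a log this small turns “stumbled-upon” evidence into something countable and quotable when review time comes around.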

An example of grey metrics: my supervisor (Stella Ng, Director of Research, CFD) and colleague (Lindsay Baker, Research and Education Consultant, CFD) developed a talk on authorship ethics. One of the CFD Program directors heard a brief version of this talk and asked for the slides. That Program director (who also happens to be Vice-Chair of Education for Psychiatry at the University of Toronto) has shared those slides with her department and now they are using the content from those slides around authorship ethics to guide all their authorship conversations, to ensure ethical practice. In addition, Stella and Lindsay developed an authorship ethics simulation game to teach about authorship ethics issues. A colleague asked for this game to be shared and it has now been used in workshops at other institutions. These were personal asks from a colleague, but which demonstrate impact in terms of how Stella and Lindsay’s research is being applied in education to prepare health professionals for ethical practice in relation to authorship. Tracking these personal requests builds a strong case of impact beyond traditional metrics or altmetrics.

There is interesting work coming out of management learning & education4 and arts-based research5 examining different ways to think about impact. The National Information Standards Organization (NISO) is also working on identifying/defining alternative outputs in scholarly communications and appropriate calculation methodologies.6 The NISO Phase 2 documents were open for public comment until April 20, 2016; the comment period has now closed, but check the website for their revised documents.

As we work on broadening our conceptions of what counts as research impact, we must try to resist the urge to further quantify our achievements (and worth) as researchers. These blog posts are not meant to be prescriptive about what types of indicators to track. I want to encourage researchers to think about what indicators are most appropriate and align best with their context and work.

We must always be cognizant and vigilant that the time we spend tracking impact could often be better spent doing work that has impact.

References:
1. Becker Medical Library. Assessing the Impact of Research. 2016. Available at: https://becker.wustl.edu/impact-assessment. Accessed July 20, 2016.
2. Becker Medical Library. The Becker List: Impact Indicators. February 04, 2014. Available at: https://becker.wustl.edu/sites/default/files/becker_model-reference.pdf. Accessed July 20, 2016.
3. Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Services Research. 2006;6(1):1. doi:10.1186/1472-6963-6-134
4. Aguinis H, Shapiro DL, Antonacopoulou EP, Cummings TG. Scholarly impact: A pluralist conceptualization. Academy of Management Learning & Education. 2014;13(4):623-39. doi:10.5465/amle.2014.0121
5. Boydell KM, Hodgins M, Gladstone BM, Stasiulis E, Belliveau G, Cheu H, Kontos P, Parsons J. Arts-based health research and academic legitimacy: transcending hegemonic conventions. Qualitative Research. 2016 Mar 7 (published online before print). doi:10.1177/1468794116630040
6. National Information Standards Organization. Alternative Metrics Initiative. 2016. Available at: http://www.niso.org/topics/tl/altmetrics_initiative/#phase2. Accessed July 20, 2016.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Personality types and the 360° survey for professional development, Or “Being a Bulldozer and Still Winning Hearts”

by Tegan Darnell
Research Librarian
University of Southern Queensland, Australia

This short article is about how evidence-based practice applies on a personal/individual level, and how I’m using the outcomes of survey tools for reflective professional development.

As part of an ongoing leadership development program, I have completed a Myers-Briggs Type Indicator® (MBTI), and the slightly less well known 360° Life Styles Inventory™ (LSI). Both are evidence-based and meet rigorous academic and psychometric standards.

Although reluctant to be categorised, I have committed to ‘develop’ my practice and become a better and more effective leader. I endeavour to take what I have learned from this and use it in my practice.

The MBTI® told me I am an ENTJ type, or ‘the commander’, closely correlated with the ‘Fieldmarshal’ in the Keirsey Temperament Sorter (KTS). An ENTJ can be dictatorial, abrasive, and insensitive. Notable ENTJ types include Margaret Thatcher, Vladimir Putin, Steve Jobs, and Gordon Ramsay (the British chef who swears at people a lot).

It isn’t all bad… Basically, an ENTJ is a natural born leader with a ‘forceful’ personality. Also, Intuition (N) types have been shown to have significantly higher ego development (Vincent, Ward, & Denson 2013) – Apparently that is a good thing.

Last time I took the same test I came out as an INFJ, or ‘the advocate’, (the ‘counsellor’ according to the KTS) so my new result was somewhat of a shock. As a committed researcher-practitioner, however, I have to accept what the data is telling me. Quite a lot of time has passed since I last took the full questionnaire…

However, it was the 360° survey that was the most revealing.

In a 360° survey, not only do you complete a survey about your behaviours, but so do your boss, your direct reports, and your peers and colleagues. The differences between your self-evaluation and the perceptions others have of you are revealing.

I am a human bulldozer.

My colleagues rated me as unhealthily competitive, approval-seeking, and lacking in actual performance. Apparently I disregard others’ feelings and come across as cold and insensitive. On the positive side: my colleagues see me as an independent and unconventional colleague, and I am nowhere near as avoidant as I thought I was.

Sometimes, the evidence does not say what you want it to say. When the data is about a beloved Library service or resource, this is hard to take. When the data is about your personal performance and behaviour, it can be particularly difficult to reconcile. But, as with all research, I have asked the question, I have collected the data, and I have the results. Now, with this information, I need to make a decision about what action I take.

An ENTJ appreciates and welcomes objective and rational statements about what they do well and what could be done better. Criticisms, to me, mean: “Here is your challenge. Do whatever is in your power to make it happen”. So, I have accepted this challenge.

Being a bulldozer was never my intention, but if I am a bulldozer, I’ll be a bulldozer with friends, thank you. I’ll be working on my ‘encouraging’ and ‘humanistic’ behaviours, and doing lots of open communication (i.e. ‘listening’) over the next few weeks.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Member-Driven Decision Making

by Gwen Schmidt
Manager, Branches, Saskatoon Public Library

How many librarians does it take to change a lightbulb? If you can believe it, there are at least four different answers to this joke. My favourite is “Only one. But first you have to have a committee meeting.”

I have just finished a two-year term as the President of the Saskatchewan Library Association (SLA), following a period of previous involvement on the Board. It has been a fascinating time of change in our organization’s history, with a reshaping of our direction, our governance, and our style of decision making. I was glad to be a part of it.

One of the most interesting things about this time of change was the renewed SLA commitment to a member-driven philosophy. In 2010, the SLA membership determined that our Board structure needed an overhaul, and a Board Governance Task Force was struck. The Task Force took a look at our history, our values, our goals, and the challenges ahead, and set us on a new path with a new vision – central to which was the idea that our members would lead us.

We were always trying to be member-driven, but this renewed commitment to that idea came at a time when cheap/free consultative software tools exist in abundance, and when social media has given individuals the expectation that they can have their say easily. It was easier than ever before to make ‘member-driven decision making’ a reality.

My presidency fell during a time when strategic planning needed to be done. Instead of just doing it at the Board level, we did a broad survey of the members to find out what they think SLA should be doing, what they need from their library association, and what they think we do well. Once we had that data, we also took the opportunity at the annual conference to have conversations with people in person about it. We took broad themes out of the survey data, and had an in-person membership consultation where people could expand on those themes. All of that consultation helped us to build a robust strategic plan that is taking us forward.

During the same time, provincial, territorial, and national library associations across Canada were considering building a new national federation together, which ultimately became the Canadian Federation of Library Associations (CFLA). SLA was invited to participate. Our member-driven philosophy set a roadmap for us: before committing to participation, we took the question to our members. Did they want us to go forward as part of that federation? If yes, within what parameters? Our members gave us a resounding mandate and endorsed a set of parameters going forward. Consultation with them throughout the process of building the CFLA identified a problem to be solved: how a shared Saskatchewan-Manitoba CFLA Prairie Provinces Representative position would work. Knowing what our members wanted allowed us to set up a Saskatchewan-Manitoba working group to determine the structure of the Prairie Provinces rep and to ensure strong communication and representation.

In associations, ‘member-driven decision-making’ sounds a little – or a lot – like evidence-based decision making. Instead of doing what we think members want us to do, we ask them what they want us to do and then do that. Those collaborative conversations take time, but ultimately build trust and energy, and give better results in the end.

How many member perspectives does it take to make an association truly shine? A heckuvalotta them. But that makes the future bright.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

The Problem with the Present: C-EBLIP Journal Club, June 21, 2016

by Stevie Horn
University Archives and Special Collections, University of Saskatchewan

Article: Dupont, Christian & Elizabeth Yakel. “’What’s So Special about Special Collections?’ Or, Assessing the Value Special Collections Bring to Academic Libraries.” Evidence Based Library and Information Practice [Online], 8.2(2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

I was pleased to have the opportunity to lead the last C-EBLIP Journal Club session of the season. I chose an article which looked at the difficulties in employing performance measures to assess the value of a special collection or archives to the academic library. The article has some failings in that it is written largely from a business perspective, and uses special collections and archives interchangeably in a way that becomes problematic if you consider the archives’ responsibility as a repository for institutional records (which go through many different phases of use)—however, it did serve as a useful springboard for our talk.

What interested me was that those present immediately latched on to the problem of “What about preservation value?” when considering the article’s model of measuring performance. The article proposes that the best way to measure a special collection/archives’ “return on investment” is not simply counting the number of times an item is used (a collection-based method), but rather reporting the number of hours a user spends working with an item and what the learning outcomes of that use are determined to be (a user-based method) (Dupont and Yakel, 11).

In some ways, a user-centric approach to measuring performance in archives and special collections makes good sense. A single researcher may spend five weeks exploring fifteen boxes, or taking a close look at a single manuscript, and so recording the user hours spent may prove a more accurate measure of use. To reinforce this, there are a number of difficulties in utilizing collection-based metrics with manuscript collections. Individual documents studied within an archival collection are almost impossible to track. Generally a file is treated as an “item”, and the number of files in a box might be averaged. The article points out, accurately, that this imprecision renders collection-based tabulation of archival documents, images, and ephemera virtually “meaningless” (Dupont and Yakel, 14).

However, if the end goal is determining “return on investment”, user-centric data also leaves out a large piece of the picture. This piece is the previously mentioned “preservation value”, or the innate value in safeguarding unique historical documents. Both collection-based and user-based metrics record current usage in order to determine the value of a collection at the present time. This in-the-present approach becomes problematic when applied to a special collections or archives, however, for the simple reason that these bodies not only preserve the past for study in the present, but also for study in the distant future.

To pull apart this problem of using present-based metrics to measure the worth of a future-purposed unit of the academic library, we look at the recent surge in scholarship surrounding aboriginal histories. As Truth and Reconciliation surfaces in the public consciousness, materials which may have been ignored for decades within archival/special collections are now in high demand. Questions of this nature accounted for approximately forty percent of our usage in the last month alone. Had collections-centric or user-centric metrics been applied for those decades of non-use, these materials would have appeared to be of little worth, and the special collections/archives’ “return on investment” may also have been brought into question. The persistence of archives and special collections in preserving unique historic materials regardless of patterns of use means that these materials can play a role in changing perspectives and changing lives nationwide.

If, as Albie Sachs says in his 2006 article on “Archives, Truth, and Reconciliation”, archives and special collections preserve history “for the unborn . . . not, as we used to think, to guard certainty [but] to protect uncertainty because who knows how the future might use those documents”, might not the employment of only present-centric metrics do more damage than good? (Sachs, 14). And, if the value of an archives or special collections cannot be judged solely in the present, but must take an unknown and unknowable future into account, perhaps the formulation of a truly comprehensive measure of “return on investment” in this field is impossible.

Sources:
Dupont, Christian & Elizabeth Yakel. “’What’s So Special about Special Collections?’ Or, Assessing the Value Special Collections Bring to Academic Libraries.” Evidence Based Library and Information Practice [Online], 8.2(2013): 9-21. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/19615/15221

Sachs, Albie. “Archives, Truth, and Reconciliation”. Archivaria 62 (2006): 1-14.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Lessons Learned: A Book Editing Collaboration

by Maha Kumaran
Leslie and Irene Dubé Health Sciences Library, University of Saskatchewan
and
Tasha Maddison
Saskatchewan Polytechnic

Recently Maha and Tasha (M/T) had an opportunity to collaborate on a major research project – editing a book. The book is entitled Distributed Learning: Pedagogy and Technology in Online Information Literacy Instruction and is expected to be published in October 2016 by Chandos Information Professional Series, an imprint of Elsevier.

Editing a book is a massive, arduous, and time-consuming project. As editors of this project, M/T initiated conversations with the publishers in October 2014. The book proposal was accepted in January 2015 and the final manuscript was submitted in May 2016. The book is now in the safe custody of the publisher undergoing copy-editing and production.

Collaboration has its merits and learning moments. This post reflects on the merits of M/T’s collaboration and what they learned from working together.

There is lots to do:
• Initiating the project and connecting with various groups of people
These groups include your institutional ethics office, your fellow editor(s), and people at the publishing house. You need to find reviewers for your initial book proposal, chapter authors, and peer reviewers. For this project there are 22 chapters and 44 contributors, so you can imagine how many emails were sent and responded to.
• Deadlines to deal with
There are deadlines for abstracts and chapters from authors, and for feedback from peer reviewers on each chapter. Then you need to work with the publishing house until the final manuscript is submitted.
• Corresponding throughout the project
Corresponding about copyrighted material within chapters and contributor agreements with authors, negotiating the contract with the publisher, collaborating with your co-editor almost on a daily basis, sending acknowledgements to chapter authors on receiving their work, writing letters to the peer reviewers, sending their feedback to authors, and all the while remembering to keep confidential information in check, etc.

Merits of this Collaboration – Strengths of M/T Combined:
After submitting the final manuscript, M/T appreciated having worked together on this project. Neither one could have completed this task without the other’s help. Tasha’s expertise in communicating in a timely fashion and with empathy (especially when a chapter had to be rejected), her ability to nudge others gently with reminders, and her positive attitude throughout the project made for a huge skill set.

Maha’s prior experience with publishing, writing, and co-editing proved invaluable throughout the process especially when negotiating the contract with the publisher and anticipating critical next steps. Maha is also a good editor and Tasha looked to her for advice on many issues throughout the peer review process.

What did we learn?
• Find someone that complements your skills and hopefully shares your research interests – easier said than done!!
• Communicate! Communicate! Communicate! If you cannot answer an email immediately, acknowledge receipt and let them know you will respond soon. M/T communicated primarily through email, but also met in person, phoned each other, and sent text messages.
• Be prepared. Last-minute issues will occur: an author might pull out too late, decide not to submit the chapter, or not accept your revision suggestions. Learn to remain nimble and adapt accordingly.
• Sometimes things will fall apart: something won’t meet your expectations, events won’t happen on time, issues won’t get resolved the way you want. Get over it and move on!! See the big picture.
• If you find someone who is easy to work with and you form a successful team – hang onto them! This doesn’t happen every day and it is truly special!

Other Brain-Work posts on collaboration:
http://words.usask.ca/ceblipblog/2015/06/09/collaborating-for-research-experiences-and-lessons-learnt/
http://words.usask.ca/ceblipblog/2014/08/19/to-boldly-go-the-research-collaboration/
http://words.usask.ca/ceblipblog/2015/04/21/co-authoring-shared-work-%E2%89%A0-less-work/
http://words.usask.ca/ceblipblog/2015/09/01/co-authoring2/

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Apparently Conferencing via Twitter is a Thing Now

by Christine Neilson
Information Specialist, St. Michael’s Hospital
Toronto, Ontario

Back in April, I saw a blog post about holding an entire conference – The World Seabird Twitter Conference (#WSTC2) – over Twitter. I know there’s usually tweeting happening at conferences, but holding the conference itself via Twitter? The conference was set up so that participants were given 15 minutes and a maximum of 6 tweets to present their work. Naturally, the audience could tweet their comments and questions, too. It blew my mind. And it got me thinking about the benefits and drawbacks of such a plan.

The first benefit of this form of presentation seems obvious; with no travel and a platform that’s freely accessible, participation is free. The downside with this is that – I think – conference content is only one of the reasons people go to conferences, and the face-to-face payoff is missing. Yes, you can theoretically “meet” new people via Twitter, but meeting new colleagues at social functions, having coffee with old colleagues you haven’t seen in a while, visiting vendor reps, and having an excuse to travel are all important parts of conference-going.

In terms of the format, WSTC speakers were allowed 840 characters and six images to get their message across, and that isn’t much. If you have a lot to say, that could be a problem. So perhaps a Twitter presentation is more like a poster presentation of sorts. The format is restrictive, but I think that could be a good thing. Presenters are forced to be clear and concise, and to make use of meaningful graphics: all good things in my opinion. Also, not everybody is made for public speaking, so this kind of venue might appeal to people who are intimidated by speaking to a crowd, or who aren’t particularly skilled presenters. And unlike a webinar, where a portion of the presentation is simply lost if an audience member steps away to deal with e-mail or other distractions, it’s easy to catch up on the tweets if something draws their attention away. The tweets are also easy to retweet if they resonate with the audience, so we can hopefully say goodbye to ultra-vague tweets referencing conference presentations.
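The arithmetic behind that budget is simply six tweets at Twitter’s then 140-character limit (6 × 140 = 840). As a hypothetical illustration of how tight that is, here is a sketch that greedily packs an abstract into at most six tweets without splitting words; the function is my own invention, not part of the conference’s tooling:

```python
def to_tweets(text, limit=140, max_tweets=6):
    """Greedily pack whole words into tweets of at most `limit` characters."""
    tweets, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            # Current tweet is full; start a new one with this word.
            if current:
                tweets.append(current)
            current = word
    if current:
        tweets.append(current)
    if len(tweets) > max_tweets:
        raise ValueError("abstract needs %d tweets; only %d allowed"
                         % (len(tweets), max_tweets))
    return tweets
```

Running a typical conference abstract through a splitter like this quickly shows why presenters are forced to be clear and concise: most abstracts simply do not fit.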

Would you be up for this kind of conference? I would be very disappointed if I never went to a real, in-person conference again, but I’m intrigued by the idea of having a conference via Twitter. One thing I do wonder about is what it would be like organizing such an event. You wouldn’t have to book a venue and order coffee, but you would still have to have a process in place for putting together the program and organizing the presenters. Would it be just as much work? More? Less? Perhaps this is something the EBLIP community might consider testing out for the “off” years between the international EBLIP conferences. I don’t know about you, but I’d participate.

Example of a tweet from the World Seabird Twitter Conference (#WSTC2), April 2016 https://twitter.com/Nina_OHanlon/status/720567956934148096

This article gives the views of the author and not necessarily the views of St. Michael’s Hospital, the Centre for Evidence Based Library and Information Practice, or the University Library, University of Saskatchewan.

Remaining Relevant with Reconfigured Print Collections

by Jill Crawley-Low
Science Library, University of Saskatchewan

Lately I have found collection management, specifically weeding of a collection and identification of a core collection, to be a challenging exercise. The University of Saskatchewan’s (U of S) Engineering Library’s print collection, built over many years, is being reconfigured, including downsizing, to open up space for infrastructure that supports individual and group learning and research. A reconfigured collection remains in the branch while the majority of the collection that has not been dispersed is housed in remote compact storage. Although materials can be requested from storage by placing holds in the catalogue, in-person browsing will be limited to the branch collection. A combination of computer-generated data and human-generated subject knowledge was applied to determine which items to keep and which to discard. After all this effort, the question posed is: will the reconfigured collection serve the needs of those who use it?
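The combination of computer-generated data and human-generated subject knowledge can be pictured as a simple decision rule. The fields and threshold below are my own hypothetical illustration, not the library’s actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    checkouts_last_10y: int          # computer-generated usage data
    flagged_core_by_librarian: bool  # human-generated subject knowledge

def keep_in_branch(item, min_checkouts=3):
    """Keep an item on the branch shelves if either the usage data or a
    subject librarian's judgement says so; otherwise send it to storage."""
    return item.flagged_core_by_librarian or item.checkouts_last_10y >= min_checkouts
```

A rule like this makes the tension concrete: a long-unopened book fails the data test, and only the librarian’s flag – the human half of the combination – can keep it on the shelf for a future reader.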

Those who use it are U of S students and faculty who work in a fast-paced and changeable institutional environment that is also shared by the library. Teachers, researchers, and learners expect easy access to resources and assistance on demand. In response, the University Library offers a variety of service options. The library profession lends support to all types of libraries by envisioning the ideal future and developing creative ways to provide perpetual access to e-resources, share print resources cooperatively, and address preservation issues for all types of resources.

Only a small proportion of an academic collection is visible on the shelves; the rest of the iceberg sits underwater, accessible online, freely or at cost. E-resources demand management by library workers with specialized knowledge and skills: the results of mismanagement are acutely visible. Ongoing access to electronic resources requires significant labour and financial resources. Print collections do not demand management; in fact, they invite neglect. Weeding projects typically occur when there is a pressing need, such as space reduction or collection relocation, and not as an ongoing process when there is time to consult with those who use the materials and make solid decisions that will still apply some years hence.

These thoughts occurred as I stood in the stacks of that carefully acquired engineering collection and saw the well-used books side by side with those that were last opened when they were processed and shelved for the first time. I wondered about the combination of logic applied to data and the less logical subject knowledge that would result in the most relevant reconfigured print collection remaining on the shelves. I also wondered how important the unseen online component of the collection is to our users’ information needs, and the value of taking time as well as employing the subject and technical expertise of library workers to dismantle a collection that has been built over the years.

The question asked at the outset of this post, “Will the collection serve the needs of those who use it?” is going to be answered in the months to come by the usefulness of the core, or reconfigured, collection. We will monitor the use of the collection going forward, and, if some of our decisions have run counter to our service mission to assist clients on their research and learning journeys, we will take their advice and reconfigure the reconfigured collection.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Gathering Evidence by Asking Library Users about Memorable Experiences

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

For this week’s blog, I thought I’d share a specific question to ask library users that is proving highly useful, but that I haven’t seen used much before in library assessment:

“Tell me about a memorable time in the library.”

Working with colleagues Cameron Hoffman-McGaw and Meg Ecclestone, I first used this question during the in-person interview phase of an ongoing study on information literacy (IL) practices in academic library spaces. In response, participants gave detailed accounts of studying with friends, moments that increased or decreased their stress levels, and insight into the 24/7 Learning Commons environment – a world that librarians at my place of work see very infrequently, as the library proper is closed after 10pm. The main theme of the answers was the importance of supportive social networks that form and are maintained in the library.

The question was so successful in the qualitative phase of our IL study that I was curious how it might translate to another project: an upcoming major library survey to be sent to all campus library users in March 2016. Here’s the text of the survey question we used:

“Tell us about a memorable time in the library. It might be something that you were involved in, or that you witnessed. It might be a positive or negative experience.”

It wasn’t a required question; people were free to skip it. But 47% (404/851) of survey takers answered it, and the answers ranged in length from a sentence to several paragraphs. While analysis of the data generated by this question isn’t complete, some obvious themes jump out. Library users wrote about how both library services and spaces relieve or cause anxiety and stress, the importance of social connections and the accompanying support received in our spaces, the role of the physical environment, and the value placed on the library as a space where diverse people can be encountered, among many other topics.

To what end are we using data from this question? First, we’re doing the usual analysis – looking at the negative experiences and emotions users expressed and evaluating whether changes need to be made, policies created, etc. Second, the question helped surface some of the intangible benefits of the library, which we hadn’t spent a lot of time considering (emotional support networks, the library’s importance as a central place on campus where diverse groups interact). Now librarians are able to articulate a wider range of these benefits – backed up with evidence in the form of answers to the “memorable time” question – which helps when advocating for the library on campus, and connecting to key points in our Academic Plan document.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

C-EBLIP Research Network: You’re Invited

by Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice

For almost three years, the Centre for Evidence Based Library and Information Practice (C-EBLIP) has been supporting librarians at the University of Saskatchewan (U of S) as researchers and promoting evidence based library and information practice (EBLIP). This spring, we launched the C-EBLIP Research Network, an affiliation of institutions committed to librarians as researchers and/or interested in evidence based practice. The Network is conceived as a supportive intellectual space supplemented by concrete activities. A two-year pilot offering institutional membership in the C-EBLIP Research Network is kicking things off at a national and international level.

When I look at what we’ve achieved internally here at the U of S by getting librarians together for such things as the C-EBLIP Journal Club, Writing Circle, Code Club, the C-EBLIP Fall Symposium, and even this blog, I can’t help but wonder what we might achieve if we extend the participation, the collaboration, and the sharing. And so, the C-EBLIP Research Network is designed to facilitate all of those things within a global context (we go big or we go home).

An institutional membership in the C-EBLIP Research Network is primarily for the benefit of librarians within that institution who are actively engaged in research and/or evidence based library and information practice. Becoming an institutional affiliate member of, and signing a Memorandum of Understanding (MOU) with, the C-EBLIP Research Network demonstrates that the larger institution supports its librarians’ growth in these areas. Member institutions can come from many sectors (e.g. universities, public libraries, schools, special libraries, research groups). A $250 CAD yearly membership fee will be reinvested in C-EBLIP Research Network programming (e.g. webinars, research grants). Librarian contacts from each institution will act in an advisory capacity to start. The Network as it stands now is essentially a scaffold: librarians at member institutions will have the chance to shape it in meaningful ways.

There are all kinds of networks out there: business networks, computer networks, telecommunication networks, television networks, even our nervous system is a network. One thing they all have in common is information exchange. The different nodes are all linked together to facilitate the movement and sharing of information. Just look at how many configurations there are!

Network topologies. By Maksim (derivative work by Malyszkz), from NetworkTopologies.png, Public Domain, https://commons.wikimedia.org/w/index.php?curid=15006915

So why a network? Why not another term such as association, partnership, alliance, consortium, or syndicate? Well, the last sounds a bit too much like we’d be up to no good, and the rest didn’t seem to click. And why am I so fixated on these diagrams (I really am)? Novick and Hurley state that “a large body of research has shown that schematic diagrams […] are powerful tools for thinking”; however, “it is important to note that superior performance is only obtained when the display format and the structure of the environment are consistent” (2001, p. 160). The idea of a network, to me, speaks to a flat structure with no institution above another. We are peers, practicing librarians involved in the research enterprise. Yes, some librarians will have more experience, or more experience in certain areas, but that is what makes the network a beautiful idea. In terms of conducting research as practicing librarians and incorporating EBLIP into our daily work, getting information from a variety of sources and sharing information in turn can assist us in many different ways. In their research, Novick and Hurley studied three schematic diagrams: the matrix, the network, and the hierarchy. Their descriptions of the network diagram capture what we envision for the C-EBLIP Research Network:

• Any node (in our case, institution) may be linked to any other node (i.e., there are no constraints) (p. 163).
• All of the nodes have identical status (i.e., they are indistinguishable except by name) (p. 164).
• The links between nodes may be associative (p. 164).
• Any number of lines can enter and leave each node, so both one-to-many and many-to-one (i.e., many-to-many) relations can be represented simultaneously (p. 165).

(The above are just a few of the properties of networks, but they are the properties that speak the loudest to C-EBLIP Research Network configuration.)
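For the Code Club crowd, these properties can be sketched as a tiny undirected graph. This is purely an illustration (the institution names are made up, and the class is not part of Novick and Hurley’s work): any node may link to any other, no node outranks another, and many-to-many relations fall out naturally.

```python
from collections import defaultdict

class Network:
    """A minimal flat network: no hierarchy, no constraints on links."""

    def __init__(self):
        self.links = defaultdict(set)  # node -> set of linked nodes

    def add_link(self, a, b):
        # Links are associative (undirected), so store both directions.
        self.links[a].add(b)
        self.links[b].add(a)

    def neighbours(self, node):
        return sorted(self.links[node])

net = Network()
net.add_link("Institution A", "Institution B")
net.add_link("Institution A", "Institution C")
net.add_link("Institution B", "Institution C")  # any node may link to any other

print(net.neighbours("Institution A"))  # ['Institution B', 'Institution C']
```

Every node is just a name with a set of links, which is the point: all members have identical status, distinguishable only by name.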

So far, the C-EBLIP Research Network is alive and well, host to several institutional members from Canada and the UK. A list is coming soon and our numbers are growing (hello, Australia!). If you are interested in joining the C-EBLIP Research Network or would like to know more, please do not hesitate to get in touch with me: virginia.wilson@usask.ca

References
Novick, L. R., & Hurley, S. M. (2001). To matrix, network, or hierarchy: That is the question. Cognitive Psychology, 42(2), 158–216. doi:10.1006/cogp.2000.0746

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.