Paying for Publishing: A Reflection on One Approach to Opening Up Hybrid Journals

by Crystal Hampson
Services to Libraries, University of Saskatchewan

Sometimes I like to think “what if?” Lately, I’ve been thinking about a particular “what if” to do with open access, how it affects our institutions’ researchers as authors, and how the publishing system could make OA better (easier, more practical) for them. Basically, I don’t want researchers to have to put their time into administrative work like negotiating author rights, keeping track of embargo periods and article versions for each article for deposit into an IR, finding funding to pay OA article processing charges, keeping track of differing funding agency mandates, and so on. I want researchers to put their time into research. I also want them to be able to publish in whatever journal is best for them in terms of audience and timeliness. Researchers want that too (Solomon and Björk, Nariani and Fernandez). I also don’t want to see institutions deal with sorting out all these details for every individual article one by one. There are too many articles, too many variations, and ultimately too much administrative process. Keeping track of all the many varied results is impractical.

I’ve therefore been musing about “what if” we had a different model to cover OA publishing charges for currently hybrid journals: not a model that uses article-level details to calculate an offset to subscription costs, or some type of discount on OA APCs that still have to be paid, but a model that includes all author charges (why not include all types, while we’re at it: OA fees, page fees, etc.) so that our researchers can just publish articles OA, with no charge or administrative process for them and minimal process altogether.

In the course of my reading, I recently came across Jan Velterop’s notion of the “New Big Deal.” Velterop is the co-creator of the Big Deal model for selling journal packages. He theorizes a national approach to purchasing not only toll content but also what is essentially gold OA publishing services. Velterop notes that an individual library does not have enough leverage to negotiate such a deal well (and I would add that an individual researcher has even less leverage for any negotiation); such negotiation needs to happen at a national level. I would argue that “open” is open to the world, so not only a national but an internationally coordinated approach will ultimately be necessary: not necessarily one global license, but national licenses that amount to global access. Though Velterop discussed this idea in 2012, it is not in place today. I like the fundamental simplicity of the idea, though I realize it is not simple to enact.

Would this approach save money? I recognize that publishers provide value to the scholarly communication system, and I don’t object to a reasonable margin of profit for their services. It seems to me that trying to save, or make, a lot of money through the switch to OA is just holding up “open.” What if we made it open first, at the current price and distribution among participants (institutions, journals, and publishers)? What if multiple publishers could then ingest the open content and truly compete, without monopoly over content, so that costs could become lower over time through competition and reduced complexity? What if we started with current contribution levels and contributing institutions negotiated a fair distribution of costs among themselves over time?

I admit I usually see the good elements of a “what if” idea at first. The flaws appear to me later, like where such a model leaves independent OA journals. And certainly, “what if” only goes so far before we hit political and business realities. On the other hand, a completely new model with too many unknowns becomes something that we can’t realistically, practically, and quickly implement, which further holds back the transition to open. Certainly, models involving myriad micropayments and varied author rights terms are also not viable on a large scale. So the idea of taking a model that presents less of an unknown, one with less financial uncertainty for the parties involved, and developing it from there has a certain appeal.

Nariani, Rajiv, and Leila Fernandez. “Open Access Publishing: What Authors Want.” College & Research Libraries 73.2 (2012): 182-95. HighWire Press. Web. 10 Apr. 2013.

Solomon, David J., and Bo-Christer Björk. “Publication Fees in Open Access Publishing: Sources of Funding and Factors Influencing Choice of Journal.” Journal of the American Society for Information Science and Technology 63.1 (2012): 98-107. Wiley Online Library. Web. 24 Jul. 2013.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

A Style Manual for the Rest of Us

by Christine Neilson
Information Specialist, St. Michael’s Hospital
Toronto, Ontario

I’ve decided I like writing. But the hard part about writing is making sure that it’s done well, and I’ve read enough library literature that was not well written that I get concerned about my work falling into that category, too. Before Christmas some colleagues told me about Steven Pinker, a cognitive psychologist/linguist, and his recent book The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. They told me that Pinker’s books are very good and even entertaining. I thought, “The man writes about writing – how entertaining could it be?”, but I was curious. When I flipped through the book, I noticed that it included several cartoons: I took this to be a good sign. In the prologue Pinker wrote, “By replacing dogma about usage with reason and evidence, I hope not just to avoid giving ham-fisted advice but to make the advice that I do give easier to remember than a list of dos and don’ts” (p. 6). If that doesn’t speak to an EBLIPer, I don’t know what does. I was sold.

It turns out the book was indeed entertaining. More importantly, it was easy to read and helped me to pinpoint a few areas that I need to work on. When all is said and done, I took two things away from Pinker’s book. First: there are rules to follow, including but not limited to grammar and punctuation, but they are not an end in themselves. They are tools to get you closer to the goal that any writer should have in mind: composing clear prose that engages a reader in a way that makes the topic easy to understand. In fact, some of the rules we were taught in school are incorrect, and the application of some others leaves room for the writer’s discretion; learning this made me feel a bit better about my writing (maybe it’s not so bad after all!). But what I liked best about Pinker’s book was the use of concrete examples of good and bad writing, and how the bad writing might be improved. It reminded me of the reality TV program “What Not to Wear”, where the hosts set out to improve participants’ wardrobes by showing them not only which elements work and which don’t, but also by giving the reasoning behind the advice so participants can continue to improve their style after the show is over. Pinker’s examples were drawn from a variety of sources, from academic papers to advice columns, and they illustrate that good (or bad!) writing is not limited to a specific area.

Pinker’s book also drove home for me that writing is an art form. There are rules and techniques to learn, but just like being able to follow a recipe doesn’t make you a master chef, knowing the rules does not necessarily make you a great writer. Any art form requires creativity, time, and effort. You have to develop a feel for what you’re doing that comes from experience: learning when to follow the rules and when to throw them away, and learning from others’ example. This may not be very encouraging for those of us who are not naturally inclined to be great authors and want a quick fix, but it shouldn’t come as a surprise to anyone who has ever tried to become proficient at anything, whether it’s writing, karate, mathematics, or Ukrainian dancing.

So how can we move our writing along the spectrum of “bad” to “good”? By practicing and reflecting on the good writing we come across. Reading Pinker’s book can’t hurt either. In fact, I believe I’ll read it again.

This article gives the views of the author and not necessarily the views of St. Michael’s Hospital, the Centre for Evidence Based Library and Information Practice, or the University Library, University of Saskatchewan.

Can I get a copy of your slides?: Sharing Conference Content Accurately and Reliably

by Selinda Berg
Schulich School of Medicine – Windsor Program
Leddy Library, University of Windsor

Librarians rely heavily on conferences as venues in which to share ideas, innovations, developments, and scholarly research. While conferences offer great opportunities to share information, it can sometimes be challenging when audience and community members want to make use of, build on, or even delve deeper into the content presented. Is there a way we can improve upon the ways that the information shared at conferences is disseminated and applied to our practice?

To the typical conference presenter, the conference process has become rather routine: submit a 250-300 word abstract of the planned work (sometimes 4, 5, or 8 months in advance); receive an acceptance notice (hopefully); continue to work out the ideas presented in the abstract; and finally present the paper before both new and familiar colleagues. This is often followed by another predictable occurrence: two or three days after the conference, an email arrives from an audience member requesting the slides from the presentation. (I think it is also becoming increasingly common for conference organizers to be burdened with trying to retrieve slides from presenters to share with delegates and on public websites.) For me, the request for my slides is both exhilarating (“Wow! The content resonated enough for someone to follow up!”) and unsettling (“My slides? Oh no!”). This unsettledness does not stem from an apprehension about sharing with others. My concern is that I am not sure how clearly my ideas are articulated or how accurately my results are presented through my slides.

Like many of us, I have embraced commonly accepted guidelines on effective PowerPoint slides:
• Minimal text: key words used only as a means to emphasize and highlight points for the audience
• An image: A pleasing aesthetic which complements and reinforces the content presented
• A quotation: A passage that is critical to the presentation but may not align with my own ideas; it may instead serve as a point of reference to argue against
• Data: Tables of data for which I provide robust explanation

Because my slides in no way provide or capture the complexity of the ideas that I have presented, I worry about providing my slides without my interpretation of what is on them. Some people have taken steps to share the wholeness of their ideas by posting conference scripts on research blogs and other open sites. I really like and respect this movement; however, it is not my general practice to write the kind of script that would be appropriate to share in such a way.

The reality is that the far-reaching use of PowerPoint has long been questioned and criticized. I don’t agree with Tufte, one of the most often cited critics of PowerPoint, who compared the presentation software to a drug that is “making us stupid, degrading the quality and credibility of our communication” and “turning us into bores” (2003, p. 24). I do, however, agree with Doumont (2005), who offers a counterargument: PowerPoint can indeed be valuable, but it is first and foremost a companion for oral presentations, which typically have a different purpose than written documents. Slides are designed to be “viewed while the presenter is speaking, not read in silence like written documents.” I very much see PowerPoint as a companion for the audience while I am speaking, not as a standalone document to be read.

So in the end, I am left wondering if there is value in exploring a more formal and consistent process for sharing conference content so it is more trustworthy and usable. Thinking about this challenge, I have thrown around a few ideas and come up with one possible solution. But I still wonder what other ideas are out there.

In addition to the 250-300 word abstract submitted for acceptance 4-7 months in advance, perhaps we could also request (or require) that presenters provide a 500-700 word extended abstract (approximately one page, single spaced), either immediately before or after the conference, to be posted as a type of modified conference proceedings common in other fields. Some library and LIS conferences, including but not limited to EBLIP, ALISE, and CAIS, have embraced a longer abstract, but I am most commonly asked to provide 250 words, which becomes the document of record for my presentation. Collecting the extended abstract during or following the conference allows the presenter to ensure the ideas captured are in their final form, and also allows the information to be shared accurately and in the tone and manner that the researcher/presenter intended. Libraries’ increasing role in managing institutional repository software, which often includes conference modules, makes managing this initiative both simple and accessible for library conferences. It also aligns with our values of making information and research more open and accessible.

For me, a 600-word extended abstract seems much more reliable and robust than a set of visually pleasing slides or a brief abstract created months before. I think such a gesture would help us build on the important ideas, research, and evidence presented at conferences, and of course allow for better citation of these ideas. Here’s a concrete example of this need: an article reviewer asked me to cite a conference presentation directly related to my topic, one that I had not attended, though the slides were available online. I very much wanted to acknowledge the ideas but felt uncomfortable citing something of which I had such slight knowledge and only a visual glimpse. I would have felt much more comfortable had I been able to view or access a conference record that was composed as a written—not visual—document, created with the intention of sharing the ideas and interpretations as fully and clearly as possible.

I have been to so many incredible conferences where the presentations have been innovative, robust, and valuable; I worry that the ideas of these scholars are not as accessible, usable, and reliable as they deserve to be.

How do you think we can ensure the valuable knowledge presented at our professional conferences can be shared accurately and reliably?

Doumont, J. L. (2005). The cognitive style of PowerPoint: Slides are not all evil. Technical Communication, 52(1), 64-70.
Tufte, E. R. (2003). The cognitive style of PowerPoint. Cheshire, CT: Graphics Press.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Library Assessment Data Management ≠ Post-It Notes

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

Over the last few years, data librarians have become increasingly focused on data management planning as major funders and journals insist that researchers have data management plans (DMPs) in place. A DMP is a document that outlines how data will be taken care of during its life cycle. Lately I’ve spent a lot of time thinking about how my data service portfolio dovetails nicely with library assessment activities. A lot of discussion in the library evidence-based practice community is about obtaining and analyzing data and stats, with little emphasis on the stewardship of that information. Recently, I gave a talk at an assessment workshop put on by the Council of Post-Secondary Library Directors where I reflected on data management for assessment activities. This post introduces some of the steps in working out a DMP. Please note that while DMPs usually refer to data, not statistics, in my world the ‘D’ stands for both.

Step 1: Take an Inventory
You can’t manage what you don’t know about! Spend some time identifying all your sources of evidence and what format they’re in. I like to group by themes – reference, instruction, e-resources, etc. While you’re doing this, it’s also helpful to reflect on whether the data you’re collecting is meeting your needs. Collecting something that you don’t need? Ditch it. Not getting the evidence you need? Figure out how to collect it. Also think about the format in which the data are coming in. Are you downloading a PDF that’s making your life miserable when what you need is a .csv file? See if that option is available.

Step 2: The ‘Hit by the Bus’ Rule (aka Documentation)
If you’re going to archive assessment data, you need to give the data some context. I like to think of this as the ‘hit by a bus’ rule. If a bus hits me tomorrow, will one of my colleagues be able to step into my job and carry on with minimal problems? What documentation is required by someone else to understand, collect, and use your data? Every single year when I’m working on stats for external bodies, I have to pull out my notes and see how I calculated various numbers in previous years. This is documentation that should be stored in a safe, yet accessible place.

Post-its don’t count as ‘documentation.’ Neither does a random sheet of paper in a towering stack on your desk. Photo by Wade Morgan

Step 3: Storage and Backup
You’ve figured out what evidence and accompanying documentation you need to archive. Now what are you actually going to do with it? For routine data and stats, at my institution we use a shared drive. Within the shared drive I’ve built a simple website that sits on top of all the individual data files; instead of having to scroll through hundreds of file names, users can just click links that are nicely divided by themes, years, vendors, etc. IT backs up the shared drive, as do I on an external hard drive. If your institution has access to a Dataverse hosted on a Canadian server, that is also a good option.
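If you’re curious what that kind of index might look like in practice, here is a minimal sketch: a small script that walks a shared drive and writes a single HTML page, grouping files by their top-level theme folders. The drive path and folder layout are hypothetical, not my actual setup, and your IT department may well prefer a different approach.

```python
# A minimal sketch (hypothetical paths and folder names) of generating a
# simple HTML index over a shared drive of assessment data files, grouped
# by top-level theme folders such as reference/, instruction/, e-resources/.
from pathlib import Path
import html

SHARED_DRIVE = Path("//server/library-assessment")  # hypothetical location
OUTPUT = SHARED_DRIVE / "index.html"

sections = []
for theme_dir in sorted(p for p in SHARED_DRIVE.iterdir() if p.is_dir()):
    # One bulleted list of links per theme folder, including its subfolders.
    links = [
        f'<li><a href="{html.escape(f.relative_to(SHARED_DRIVE).as_posix())}">'
        f"{html.escape(f.name)}</a></li>"
        for f in sorted(theme_dir.rglob("*"))
        if f.is_file()
    ]
    sections.append(
        f"<h2>{html.escape(theme_dir.name)}</h2>\n<ul>\n" + "\n".join(links) + "\n</ul>"
    )

OUTPUT.write_text(
    "<html><body><h1>Assessment data index</h1>\n"
    + "\n".join(sections)
    + "\n</body></html>",
    encoding="utf-8",
)
print(f"Wrote index covering {len(sections)} theme folders to {OUTPUT}")
```

Re-running a script like this whenever files are added keeps the index current without anyone maintaining links by hand.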

Step 4: Preservation
For key documents, you might consider archiving them in a larger university archive. LibQUAL+ results, documentation, and raw data are currently being archived via my institution’s instance of DSpace.

Step 5: Sharing
I always felt like a hypocrite, imploring researchers to make their data open when I squirreled library data away in closed-access shared drives. Starting with LibQUAL+, this past year I’ve tried to make as much library data open as possible. This wasn’t just a matter of uploading the files to our DSpace; it also involved anonymizing the data to ensure no one was identifiable.
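As a rough illustration of that anonymization step, the sketch below drops direct identifiers and free-text comment fields from a survey export before it is shared. The file and column names are made up for the example (they are not actual LibQUAL+ fields), and real de-identification may also require checking that combinations of the remaining columns can’t identify anyone.

```python
# A minimal sketch, assuming a hypothetical survey export in CSV form.
# Column names are illustrative only, not actual LibQUAL+ field names.
import csv

IDENTIFYING_COLUMNS = {"email", "student_id", "ip_address", "comments"}

with open("survey_raw.csv", newline="", encoding="utf-8") as src, \
     open("survey_open.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    # Keep only the columns that are not in the identifying list.
    kept = [c for c in reader.fieldnames if c.lower() not in IDENTIFYING_COLUMNS]
    writer = csv.DictWriter(dst, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in kept})
```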

If you’re going to share data with your university community and/or the general public, keep in mind that you’ll need to identify this right away when you’re designing your evidence-collection strategies. For example, if you’re doing a survey, participants need to be informed that their responses will be made available. Ethics boards will also want to know if you’re doing research with humans (monkeys too, but if you’ve got monkeys in your library, you’ve got bigger problems than figuring out a DMP…).

Perhaps the most important aspect of sharing is setting context around stats and data that go out into the wild. If you’re going to post information, make sure there’s a story around it to explain what viewers are seeing. For example, there’s a very good reason that most institutions score below expectations in the “Information Control” category on LibQUAL+: there isn’t a library search tool that’s as good as Google, which is what our users expect. Adding some context that explains that the poor scores are part of a wider trend in libraries, and why this trend is happening, will help people understand that your library is not necessarily doing a bad job compared to other libraries.

Want more info on data management planning? Here are a few good resources:

DMP Builder by the California Digital Library

VIU’s Data Management Guide

Research Data Management @ UBC

What are your thoughts about library assessment data and DMPs? Does your institution have a DMP for assessment data? If so, how does your institution keep assessment data and stats safe? Let’s keep the conversation going below in the comments, or contact me at kathleen.reed@viu.ca or on Twitter @kathleenreed.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

I can’t make bricks without clay: A Sherlock Holmesian approach to Research and EBLIP

by Marjorie Mitchell
Librarian, Learning and Research Services
UBC Okanagan Library

Sherlock Holmes was invoked in the inaugural post of Brain-Work, the C-EBLIP blog, and I would like to revisit Conan Doyle as the inspiration for this post. So, being a curious librarian, I searched “Sherlock Holmes and research” and came across the heading “What user researchers can learn from Sherlock Holmes.” The author, Dr. Philip Hodgson, took quotes from a variety of the Sherlock Holmes novels and laid out a five-step research process for investigators working in the user-experience field. As I read Dr. Hodgson’s article, it struck me that there is a strong kinship between the user-experience community and the library community that extends beyond the electronic world. I also believe Hodgson’s steps provide a reasonable starting point for novice evidence-based library and information practice (EBLIP) researchers to follow.

Step 1 – According to Hodgson, the first step is understanding the problem, or formulating the question. I would adapt it even further and suggest that being curious is the very first step. If we’re not curious, we can’t identify what we want to know, or the question we hope to answer. Perhaps a mindset of being open to the discomfort of not knowing motivates researchers to embark on the adventure of inquiry. Once my curiosity has been aroused, I move to formulating a question. Personally, my question remains somewhat fluid as I begin my research because there are times I really don’t have enough information to formulate an answerable question at the outset.

Step 2 – Collecting the facts, or as I prefer to call it, gathering the evidence, follows. This is one of the juicy, tingly, exciting parts of research. Once I have a question, I think about what information will answer it. Sometimes simply reading the literature will give me enough of an answer. At other times, I have to go further. Even just thinking about methods can send a shiver of excitement through me. Administering surveys, conducting interviews, or running reports from the ILS in hopes they will illuminate some arcane or novel library user behavior are all ways of collecting juicy evidence; it is exciting to see initial results come in and begin to decipher what the results are actually saying. Sometimes the results are too skimpy, or inconclusive, and the evidence-gathering net needs to be cast again in a different spot for better results.

Step 3 – Hodgson suggests the next step should be developing a hypothesis to explain the facts you have gathered. This step, as much as or more than the others, requires our brain-work. Here we bring our prior knowledge to bear on the results and how they relate to the question. It is a time for acute critical thinking as we take the results of our evidence gathering and determine their meaning(s). Several possible meanings may arise at this stage. Hodgson implies it is important to remain open to the multiple meanings and work to understand the evidence gathered in preparation for the next step.

Step 4 – In this step, Hodgson is especially Holmesian. He suggests it is now time to eliminate the weaker hypotheses in order to come closer to a solution. The focus on user experience research is especially strong here: specific, actionable solutions are being sought to the question identified in the first step. Here he recommends evaluating your evidence and eliminating the weaker evidence in favor of the stronger. He is also cognizant of the need for solutions that are affordable and can be implemented in a given situation. While the whole of this step may not apply to all research, much of it will.

Step 5 – Implementation or action now has its turn. Again, Hodgson is speaking directly to the user experience audience here. However, implementation or action based on research may lead to a decision not to implement or act upon a suggestion. The strength lies in the process around reaching this decision. Questions were asked; evidence was gathered; analysis took place; judgment was applied. As Hodgson pointed out, this is a much better process than proceeding by intuition.

Finally, I would like to add a Step 6 to Hodgson’s list. In order to really know whether the action implemented had the desired effect, or an unintended one, it is important to evaluate the results of the action or change. In the effort to publish results of research, timeliness is an issue; researchers do not often have the luxury of the time it would take to measure an effect. However, even in those cases, I am interested in what type of evaluation might take place at a later date. Sometimes researchers address their future evaluation plans; sometimes they don’t. Even if those plans aren’t being shared, I hope they are being considered.

This is a simple and elegant plan for research. In its simplicity, it glosses over many of the messy complications that arise when conducting research. That said, I hope this post encourages librarians to follow their curiosity down the path of research.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Planning for a First-Time Sabbatical

by DeDe Dawson
Science Library, University of Saskatchewan

Happy New Year, Brain-Work readers!

2015 is shaping up to be an exciting year for me professionally. This is my sixth year as a tenure-track academic librarian at the University of Saskatchewan and, as I type these words, my tenure file is making its way through the various campus committees. If all goes well (i.e. my tenure is awarded!), I will be eligible for my first sabbatical as of July 1st, 2015.

Of course, administrative planning for such things has to happen many months in advance, so I was composing my sabbatical application back in September 2014. As it happens, I was working on the application the same day that Kristin Hoffmann’s Brain-Work blog post, Taking Time for Research, came out. The post described her recent sabbatical and highly recommended the experience to all researching librarians lucky enough to have the opportunity. Well, the timing certainly felt auspicious!

I am happy to report that my sabbatical application has been approved and is also now working its way through the university administration. So, with the same caveat as above (if all goes well!), I will be a first-time sabbaticant for the 2015/16 year. Thinking about my upcoming leave, I wondered what sage advice the Google oracle could provide. I have collected this short list of tips to share:

Sabbatical First Timer Tips

1. “The time to plan your sabbatical is well before it begins, up to a year or more prior in fact.” [Link]
This key bit of wisdom came from one of the first articles I stumbled across. Thanks to my application, I already have a broad outline of my sabbatical research project, and a smaller related project. However, I think I’ll also try to get some preliminary work underway this spring so that I waste no time and hit the ground running as of July 1st.

2. “The best tip a friend … got, while unemployed, was to get out of bed at a reasonably early hour, shower, shave and dress as though he had somewhere to go. He found this made him much more interested in using his time productively… Have a schedule, work every day, and feel great about your precious, precious sabbatical. You’ve earned it.” [Link]
This was #10 from a top ten list of sabbatical tips for junior faculty. Many of the other tips were useful too, but this one really resonated with me. I am not planning any extended travel and am hoping to get a workspace on campus during my sabbatical… but I know it is likely that I’ll also often work from home – because I can! And also, on the dark and frigid days of a Saskatchewan winter I know I will not want to leave the house if I don’t have to. So… how to stay motivated and productive from home? The thought of staying in my pajamas all day is appealing, but getting dressed and keeping to a schedule seems like a good strategy to maintain focus.

3. “Keep a daily research journal.”
This tip comes not from the Google oracle, but from an equally wise colleague of mine: Vicky Duncan. A research journal is useful for keeping track of your evolving research projects, insights and epiphanies, and any methodology decisions you make along the way…and why you made them! Seems to be a good idea for researchers in general (not just sabbaticants). However, I think a journal might be especially useful next year since I intend to spend a good portion of my sabbatical time reading deeply and simply thinking about what I’ve read. Lately, I often find myself just skimming research articles in a few spare minutes I have here and there during the day. So I am eagerly anticipating having the time and mental space to thoroughly immerse myself in scholarly literature. I can already anticipate the result of this though: an extended time period where I have no obvious products. Keeping a daily research journal of my reading, and the ideas that emerge, will be a nice reassurance to myself that I am making progress and have accomplished something.

4. “Don’t be under the illusion that you’ll get tons done. Be realistic and don’t beat yourself up.”
Actually this isn’t a direct quote, but my own summation from multiple sources! It seems many people have unrealistic expectations of what they can accomplish in this time frame, and also forget that one of the purposes of sabbaticals is to recharge your batteries. And that brings me to my final tip…

5. “My plea to my striving colleagues is to be true to the origins of the word. Don’t do nothing—but don’t focus on your usual activities either. Do not till the same soil; dare to do things differently for a year. You will be doing exactly what you are supposed to be doing— honoring your profession and the confidence placed in you— when you explore new areas, pursue projects that might fail, expand your mind with art or music or great literature, and generally upset your routine.” [Link]
A good tip to end on. A burnt-out academic cannot contribute much to their institution or discipline. I must admit that the first six years on the tenure-track, in a new profession, have left me just a little *tired*. This sabbatical leave is coming at an ideal time. A time when I am passing a significant hurdle in my professional life (if all goes well!), a time when I’m growing in confidence as a researcher and am eager to really concentrate on a substantial project, and a time when I feel I need a change of pace… to step back and take a breath.

Well, these are some of the best tips I’ve collected so far; I’m sure there is a lot more advice out there. If you have a nugget of wisdom to contribute to the list, please leave a comment below!

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Open Data and EBLIP – How open are we?

by Pam Ryan
Director, Collections & Technology at Edmonton Public Library

When we talk about evidence based library and information practice (EBLIP), we’re most often talking about research – conducting our own or finding and integrating research and best-available evidence into our practice. Part of this continuum should also include working towards making our own library service and operations data openly available for analysis and re-use.

This isn’t out of line with current library initiatives. Academic libraries have long supported the open access movement, and for many, services around managing institutional research data are a current priority. Influenced by open government developments in their municipalities, public libraries are increasingly working to build open data literacy through programming, encouraging citizens to think critically about government services and learn how to unlock the value of open data.

Why does open data matter for libraries? It aligns with our core values of access to information, sharing, openness, transparency, accountability, and stewardship. It supports our missions to foster information and data literacy; it can provide others with information to support our advocacy and value-of-libraries initiatives; and, maybe most importantly, it can fuel research and initiatives we ourselves haven’t yet thought of.

My own place of work has a current business plan goal to develop an open data policy that includes how we will use and share our own data, participate in Edmonton’s Open Data community, and support data literacy initiatives. We’ve begun to make progress in these areas by developing a statement on open data and collaborating with the City of Edmonton on public programs:

• Edmonton Public Library’s Statement on Open Data:
http://www.epl.ca/opendata

• EPL’s 2014 Open Data Day program:
http://www.epl.ca/odd2014

Has your library started a discussion about what your library’s approach to open data will be?

Further Reading:

Thompson, B. (2014). The open library and its enemies. Insights, 27(3), 229–232. DOI: http://dx.doi.org/10.1629/2048-7754.172

Data is Law / Civic Innovations: The Future is Open http://civic.io/2014/12/28/data-is-law/

pryan@epl.ca / Twitter: @pamryan

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

C-EBLIP Wishes You a Happy Holiday Season

by Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice

Since the first post on July 15, 2014, Brain-Work, the blog of the Centre for Evidence Based Library and Information Practice (C-EBLIP), has published 26 posts from a variety of librarians from the University of Saskatchewan and from across Canada. I’d like to extend a big THANK YOU to all the Brain-Work contributors. Your posts have been thought-provoking, interesting, and diverse, and we can look forward to more of the same in the New Year.

Brain-Work will be on a bit of hiatus for the holiday season, resuming publication on January 13, 2015. In the meantime, please have a lovely December break. C-EBLIP wishes you a very Happy New Year and all the best in 2015.

Happy Holidays!

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

C-EBLIP announces the 2015/16 Researcher in Residence

by Virginia Wilson
Director, Centre for Evidence Based Library and Information Practice

Selinda Berg, an accomplished librarian and researcher from the University of Windsor, is the University Library’s 2015/16 Researcher in Residence. Based in the Centre for Evidence Based Library and Information Practice (C-EBLIP), Selinda will begin her term in July 2015.

The University Library recognizes the critical value and importance of research as a key element of professional practice and actively supports librarians in their role as researchers. As part of our efforts to develop a research culture and the research capabilities of librarians, the University Library established the Researcher-in-Residence Program, designed to help enrich the research culture at the University Library. C-EBLIP supports librarians as researchers and promotes evidence based library and information practice.

Of the Researcher in Residence appointment, Selinda said, “I am very excited to embark on my sabbatical and the many possibilities that lie in working with the U of S librarians and the Centre for Evidence Based Library and Information Practice. While the sabbatical is very exciting, I really do feel like I won the lottery with this opportunity.”

Selinda Berg is a librarian at the Schulich School of Medicine and Dentistry–Windsor Program and the Leddy Library at the University of Windsor. Concurrently, she is completing her PhD in the Faculty of Information and Media Studies at Western University. Research has played a central role in her educational and professional career. At the core of her conception of the research environment for librarians lies the assumption that librarians themselves must take a lead role in developing a unique research culture that complements the environment and needs of practicing librarians. The initiatives that she has led locally and nationally (including the CARL Librarians Research Institute and Western’s Librarian and Archivist Research Support Network) reflect this thinking. Selinda’s professional service and research focus on the development of professional identity in academic librarians, especially in relation to their identities as researchers.

C-EBLIP and the University Library are pleased to welcome Selinda, an active and engaged librarian researcher with a keen interest in building a Canadian librarian culture of research. For more information, visit http://library.usask.ca/ceblip/activities/researcher-in-residence-program.php or contact Virginia Wilson, C-EBLIP Director, at virginia.wilson@usask.ca.

Conducting a Statistics Survey on Visible Minority Librarians in Canada

by Maha Kumaran
Leslie and Irene Dubé Health Sciences Library, University of Saskatchewan

The Visible Minority Librarians of Canada (ViMLoC) Network was established with help from the Canadian Library Association (CLA) in December 2011. While working on a research project on leadership among minority librarians, I needed to send out a survey to all Canadian minority librarians. There was no single forum or network through which I could do this. Unfortunately, CLA does not, for understandable reasons, collect information on the ethnic backgrounds of librarians to determine if they identify themselves as minority librarians; and unlike the American Library Association, CLA does not have affiliates for its various minority groups. This meant sending out the survey through CLA and all the provincial library associations.

This situation prompted me to create a common forum for Canadian visible minority librarians. After consulting with CLA, I worked closely with them on creating a Network that would not only collect statistical information about minority librarians in Canada but also serve as a common forum for this group to discuss their concerns, share ideas and success stories, have peer mentorship support, and, in the future, provide continuing education options tailored to this group. An email was sent to the CLA listserv in November 2011 calling for all librarians interested in this initiative to contact me and become founding members.

Eleven librarians from across Canada, not all of them visible minorities, became founding members of this initiative, and after approval in December 2011, the Network started to function in January 2012. Heather Cai (McGill University) and I served as co-moderators of the Network for the first two years and have now passed it off to Norda Majekodumni (York University) and Kam Teo (Weyburn Public Library) for 2013-2015.

When ViMLoC was invited to a round table at the Ontario Library Association Super Conference in January 2013 in Toronto, Ontario, attending members expressed an interest in ViMLoC undertaking two projects: gathering statistics on the number of visible minority librarians working for Canadian institutions (which was the motivation for creating the Network), and creating a mentorship program for minority librarians.

Heather and I worked on the statistics project. A short electronic questionnaire with 12 questions – open ended, multiple choice, yes/no, and qualitative response – was created using Fluid Surveys. After ethics approval from both of our institutions (Maha’s and Heather’s), the survey was sent out through the CLA, Canadian Medical Libraries, and Special Libraries Association listservs. The survey was also posted on ViMLoC’s listserv and website. It ran from December 9, 2013, to January 31, 2014.

The opening question defined visible minorities as per the Canadian Employment Equity Act and asked if the participant was a minority librarian. If they answered no, they were thanked for their participation and logged out. The purpose of this survey was to gather statistical information on visible minority librarians and we needed to ensure that responses were only from minority librarians.

Of the 191 who attempted to fill out the survey, 120 completed it. The survey had many questions on ethnic backgrounds, educational background, current employment status, etc. There is rich qualitative data that is still being analyzed for future publication. In the survey responses, minority librarians have identified areas where they need help or support and have expressed gratitude for having ViMLoC as a common forum to discuss their concerns, find mentors or friends and fellow researchers to collaborate with.

Results from the survey are currently being analyzed and written as an article to be submitted to a library journal. Please stay tuned.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.