Are we designing services with an expiration date?

Kristin Lee, Research Data Librarian
Tisch Library, Tufts University

In January of this year, I started a new job as the first Research Data Librarian at the Tisch Library at Tufts University. Librarians at Tisch have been providing research data services to the Schools of Arts & Sciences and Engineering, so one of my first jobs was to understand the services that had already been offered and where we might expand into new areas. As is the case in many libraries, a cornerstone of the data management services was providing consultations for researchers writing data management plans (DMPs) for grant applications to US federal funding agencies like the National Science Foundation and the National Institutes of Health. ‘Perfect,’ I thought, ‘I can work with this.’

Then came what felt a bit like a data librarian existential crisis – the White House Office of Science and Technology Policy memorandum calling for expanded public access to research data, among other things, had disappeared from the White House website. While this created a good excuse to go through my LibGuides and find all of the links to that memo, it also made me question everything I had been thinking about when it came to the services I was going to offer. So, as I created page after page of notes with titles like “Research Data Services”, “Data Management Services”, “Services for Researchers”, and a lot of other permutations of those few phrases, the rest of the words refused to materialize. How would I convince researchers that managing their data was in their best interest, and if I couldn’t figure out how to do that, would I still have a job?

The disappearance of the memo from the White House website has not meant that the US funding agencies have gotten rid of the DMP requirements, so my job is safe and my move from Saskatchewan to Massachusetts was not for nothing – existential crisis averted. But it still made me wonder about the longevity of services designed as a reaction to specific external forces, and to think that it might be okay to plan services that might eventually have an expiration date.

Once I stopped thinking of the service list I was putting together as written in stone, I was able to start drafting some ideas. I was able to center the researchers (who should have been my main concern in the first place) as the focus of my work, instead of relying on the threat of funding agency mandates to make them seek me out. I could think about what we would be able to do to actually help people make the most of their data in both their research and teaching. I reminded myself that data skills are important to students in their academic and personal lives, and are transferable whether they continue to study or decide to work outside of higher education. We can give the next generation of researchers the background and tools they need to keep pushing the open science movement forward.

Considering that the shelf life of some of our services is going to be measured in years rather than decades, there are clear implications for the way we share our work with each other as practitioner-researchers. If it takes three or more years to collect data, and another year passes between a paper’s acceptance and its publication in a peer-reviewed journal, how timely will the research be? One of my favorite sessions at the Research Data Access and Preservation Summit (https://www.asis.org/rdap/) in Seattle this year was the Institutional Snapshots, where we got a very brief picture of what is happening at a variety of institutions. From this session, I was able to identify institutions to reach out to for the gory details of what had worked for them and what hadn’t with respect to research data services.

I know that the services will change over time, perhaps so much that they become unrecognizable, but that doesn’t mean it isn’t worth offering them now and seeing what works. Finding venues to share what we are doing, not just our successes but also our failures and the things that keep us up at night, will help us get through times of uncertainty and change. What started as a “the-sky-is-falling” moment for me has let me get back to why I love being a data librarian in the first place: we can help researchers at all levels get the skills they need to solve the world’s big problems, and we are doing it as a community.


This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Library Assessment Data Management ≠ Post-It Notes

by Kathleen Reed
Assessment and Data Librarian, Vancouver Island University

Over the last few years, data librarians have become increasingly focused on data management planning as major funders and journals insist that researchers have data management plans (DMPs) in place. A DMP is a document that outlines how data will be taken care of during its life cycle. Lately I’ve spent a lot of time thinking about how my data service portfolio dovetails nicely with library assessment activities. A lot of discussion in the library evidence-based practice community is about obtaining and analyzing data and stats, with little emphasis on the stewardship of that information. Recently, I gave a talk at an assessment workshop put on by the Council of Post-Secondary Library Directors, where I reflected on data management for assessment activities. This post introduces some of the steps in working out a DMP. Please note that while DMPs usually refer to data, not statistics, in my world the ‘D’ stands for both.

Step 1: Take an Inventory
You can’t manage what you don’t know about! Spend some time identifying all your sources of evidence and what format they’re in. I like to group by themes – reference, instruction, e-resources, etc. While you’re doing this, it’s also helpful to reflect on whether the data you’re collecting are meeting your needs. Collecting something that you don’t need? Ditch it. Not getting the evidence you need? Figure out how to collect it. Also think about the format in which the data are coming in. Are you downloading a PDF that’s making your life miserable when what you need is a .csv file? See if that option is available.
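If your evidence already lives on a shared drive, a short script can do the first pass of the inventory for you. The sketch below is just one illustration, and the folder path and CSV columns are assumptions on my part – it walks a (hypothetical) shared-drive root where each top-level folder is a theme, and writes out a spreadsheet of what it finds.

```python
import csv
from datetime import datetime
from pathlib import Path

ROOT = Path("S:/library-assessment")  # hypothetical shared-drive root

with open("data_inventory.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["theme", "file", "format", "size_kb", "last_modified"])
    for path in sorted(ROOT.rglob("*")):
        if not path.is_file():
            continue
        parts = path.relative_to(ROOT).parts
        theme = parts[0] if len(parts) > 1 else "(unfiled)"  # top-level folder = theme
        modified = datetime.fromtimestamp(path.stat().st_mtime).date()
        writer.writerow([theme, path.name, path.suffix.lstrip("."),
                         round(path.stat().st_size / 1024), modified])
```

Opening the resulting data_inventory.csv in a spreadsheet makes it easy to spot, say, all the PDFs that really ought to be .csv files.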

Step 2: The ‘Hit by the Bus’ Rule (aka Documentation)
If you’re going to archive assessment data, you need to give the data some context. I like to think of this as the ‘hit by a bus’ rule: if a bus hits me tomorrow, will one of my colleagues be able to step into my job and carry on with minimal problems? What documentation would someone else need to understand, collect, and use your data? Every single year when I’m working on stats for external bodies, I have to pull out my notes and see how I calculated various numbers in previous years. This is documentation that should be stored in a safe yet accessible place.
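One low-tech way to enforce the rule is a README alongside each dataset. Here is a hedged sketch – the fields and folder layout are my own assumptions, not a standard – that drops a stub README into any data folder that lacks one, prompting whoever collects the data to record how the numbers were derived.

```python
from pathlib import Path

ROOT = Path("S:/library-assessment")  # hypothetical shared-drive root

# Prompts for the things a colleague would need if you were hit by a bus.
TEMPLATE = """\
Dataset: {name}
Collected by / contact:
Source (vendor, survey, system report):
Collection schedule (monthly, annual, ad hoc):
How derived numbers were calculated:
Known quirks or gaps:
Last updated:
"""

for folder in ROOT.iterdir():
    readme = folder / "README.txt"
    if folder.is_dir() and not readme.exists():
        readme.write_text(TEMPLATE.format(name=folder.name))
        print(f"Stub created: {readme}")
```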

Post-its don’t count as ‘documentation.’ Neither does a random sheet of paper in a towering stack on your desk. Photo by Wade Morgan

Step 3: Storage and Backup
You’ve figured out what evidence and accompanying documentation you need to archive. Now what are you actually going to do with it? For routine data and stats, at my institution we use a shared drive. Within the shared drive, I’ve built a simple website that sits on top of all the individual data files; instead of having to scroll through hundreds of file names, users can just click links that are nicely divided by themes, years, vendors, etc. IT backs up the shared drive, as do I on an external hard drive. If your institution has access to a Dataverse instance hosted on a Canadian server, that’s another good option.
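As one illustration of the ‘simple website’ idea (my own setup differs in the details, and the paths and markup below are assumptions), a short script can regenerate an index page from the folder structure, so nobody has to scroll through hundreds of file names.

```python
from pathlib import Path

ROOT = Path("S:/library-assessment")  # hypothetical shared-drive root

# Build one <h2> section per theme folder, with a link for each file inside.
sections = []
for theme in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    links = "\n".join(
        f'<li><a href="{f.relative_to(ROOT).as_posix()}">{f.name}</a></li>'
        for f in sorted(theme.rglob("*")) if f.is_file()
    )
    sections.append(f"<h2>{theme.name}</h2>\n<ul>\n{links}\n</ul>")

html = "<html><body><h1>Assessment Data</h1>\n" + "\n".join(sections) + "\n</body></html>"
(ROOT / "index.html").write_text(html)
```

Re-running the script whenever files are added keeps the index current without any manual editing.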

Step 4: Preservation
For key documents, you might consider archiving them in a larger university archive. LibQUAL+ results, documentation, and raw data are currently being archived via my institution’s instance of DSpace.

Step 5: Sharing
I always felt like a hypocrite imploring researchers to make their data open while I squirreled library data away on closed-access shared drives. Starting with LibQUAL+, this past year I’ve tried to make as much library data open as possible. This wasn’t just a matter of uploading the files to our DSpace; it also involved anonymizing the data to ensure no one was identifiable.
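To give a flavour of the mechanical part of that anonymization, here is a minimal sketch, assuming a raw survey export with identifying columns like ‘email’ and a free-text ‘comments’ field – the file and column names are hypothetical, and real de-identification (small demographic cells, quasi-identifiers) still needs human review.

```python
import csv

# Hypothetical column names in a raw survey export.
DROP = {"email", "student_id", "ip_address"}
REVIEW = {"comments"}  # free text can identify people; flag for human review

with open("libqual_raw.csv", newline="") as src, \
     open("libqual_open.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = [f for f in reader.fieldnames if f not in DROP]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        clean = {k: v for k, v in row.items() if k not in DROP}
        for field in REVIEW & set(clean):
            clean[field] = "[REDACTED - review before release]"
        writer.writerow(clean)
```

A script like this only handles the rote deletions; deciding whether the remaining fields could identify someone in combination still takes a person.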

If you’re going to share data with your university community and/or the general public, keep in mind that you’ll need to identify this right away when you’re designing your evidence-collection strategies. For example, if you’re doing a survey, participants need to be informed that their responses will be made available. Ethics boards will also want to know if you’re doing research with humans (monkeys too, but if you’ve got monkeys in your library, you’ve got bigger problems than figuring out a DMP…).

Perhaps the most important aspect of sharing is setting context around stats and data that go out into the wild. If you’re going to post information, make sure there’s a story around it to explain what viewers are seeing. For example, there’s a very good reason that most institutions score below expectations in the “Information Control” category on LibQUAL+ – there isn’t a library search tool that’s as good as Google, which is what our users expect. Adding context that explains that the poor scores are part of a wider trend in libraries, and why that trend is happening, will help people understand that your library isn’t necessarily doing a bad job compared to other libraries.

Want more info on data management planning? Here are a few good resources:

DMPTool by the California Digital Library

VIU’s Data Management Guide

Research Data Management @ UBC

What are your thoughts about library assessment data and DMPs? Does your institution have a DMP for assessment data? If so, how does your institution keep assessment data and stats safe? Let’s keep the conversation going below in the comments, or contact me at kathleen.reed@viu.ca or on Twitter @kathleenreed.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.