Choosing the right metric for Facebook advertisements: a (very) small case study

by Joanna Hare, Run Run Shaw Library, City University of Hong Kong

Following on from my previous Brain-Work blog post about Facebook Insights, for this post I would like to share my recent experiences of utilising paid Facebook advertising for our library’s Facebook page.

Starting in October 2016 my library has been experimenting with paid Facebook advertising. Using a modest budget we have run two broad categories of advert:

  1. Page Likes: An advert shown to our specified target audience to invite them to like our page.
  2. Boosted Posts: An advert that promotes a single post from our Page to help it reach both our existing fans and people who have not ‘Liked’ our Page.

This is the second semester in which we have utilised paid Facebook adverts, which means I have been able to learn from the first semester and make adjustments for the second. For this post, I would like to reflect on choosing the right type of metric to be able to accurately assess the performance of your advert.

In the first semester of experimenting with Facebook advertisements, we boosted a post promoting our citation management workshops (Fig 1). In my previous experience, Photo posts had performed better on Facebook than Link posts, so I used the same strategy in this advert.

Fig 1: The first advert promoting our citation management workshops, run in November 2016.

The results of this post were both encouraging and disappointing. While I was pleased that 54 people ‘reacted’ to the post, only 6 people clicked the link to view the workshop registration page. This indicated to me that people liked the content of the post (perhaps baited by a cute kitten!), but they were not engaged enough to take the extra step of clicking through to the registration page.

To address this, in the second semester I used an advert type specifically designed to direct people to a destination off Facebook. The major difference is the ‘Sign Up’ button at the bottom of the post (Fig 2), which, when clicked, directs users to the workshop registration page.

Fig 2: The second advert promoting our citation management workshops, run in February 2017.

The performance of this advert was much more encouraging: while only 6 people ‘reacted’ to the post, 93 people clicked the link to visit the registration page – a number greatly improved from the previous semester. Ordinarily a low number of reactions to a paid advert would be disappointing, but in this case, a high click-through result is preferable to lots of Likes.
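
To make the comparison concrete, here is a minimal Python sketch that tallies the figures quoted above. The “share of clicks” ratio is purely illustrative – it is not a Facebook metric – and only the counts reported in this post are used (reach and spend are omitted because they are not given here).

adverts = {
    "Boosted Post, Nov 2016": {"reactions": 54, "link_clicks": 6},
    "Sign Up advert, Feb 2017": {"reactions": 6, "link_clicks": 93},
}

for name, counts in adverts.items():
    interactions = counts["reactions"] + counts["link_clicks"]
    # share of recorded interactions that were clicks through to the registration page
    click_share = counts["link_clicks"] / interactions
    print(f"{name}: {counts['reactions']} reactions, {counts['link_clicks']} link clicks "
          f"({click_share:.0%} of recorded interactions were link clicks)")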

In terms of the impact on workshop registrations, we did record a slight increase in the number of registrations and attendees, but certainly not an increase of 93 people, meaning not all of those visitors to the registration page actually registered for or attended the workshop. This could be a reflection of the design of the workshop registration page or the process of registration. This brings us to the next (much larger!) phase in our assessment of our Facebook adverts – to investigate their real-world impact outside of link clicks and post likes.

While Facebook is notoriously secretive about how its algorithms work, it does provide a huge amount of data to demonstrate how your content is performing – and paid adverts are no different. Ultimately this exercise taught me to pay attention to the metrics used to assess the performance of Facebook adverts, and to choose an advert template that matches the goal of placing the advert. This whole exercise forms part of a broader strategy in the Library to adopt an evidence-based approach to the use of Facebook to ensure our efforts on social media are worth the time investment. Based on the increased outreach and engagement with our posts thanks to paid advertising, our Library will continue to experiment with Facebook adverts in the coming academic year.

Recommended reading for anyone wanting to explore Facebook advertising for academic libraries:

  • Chan, Christopher, “Your Mileage May Vary: Facebook Advertising Revisited,” College & Research Libraries News 77:4 (2016): 190-193, accessed December 21, 2016, http://crln.acrl.org/content/77/4/190.full
  • Young, Scott W.H., Angela M. Tate, Doralyn Rossmann, and Mary Anne Hansen, “The Social Media Toll Road: The Promise and Peril of Facebook Advertising,” College & Research Libraries News 75:8 (2014): 427-434, accessed December 21, 2016, http://crln.acrl.org/content/75/8/427.full

This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

When Evidence and Emotion Collide: Rolling Out Controversial Evidence-Based Decisions

by Kathleen Reed, Assessment and Data Librarian, Instructor in the Department of Women’s Studies, and VP of the Faculty Association at Vancouver Island University

Last week’s news of Trent University Library’s Innovation Cluster was the latest in a string of controversial decisions centering on academic libraries that jumped from campus debate into the mainstream media. With so many libraries in the middle of decisions that are likely unpopular (e.g. cutting journals, weeding little-used and aging print collections), I’ve been increasingly thinking about best practices for communicating major evidence-based decisions to campus communities.

Simply having the data to support your decision isn’t enough; rolling out a controversial decision is an art form.  Luckily, I’ve had some really good teachers in the form of colleagues and bosses (shout out to Dana, Jean, Tim, and Bob).  With the disclaimer that I’ve taken no change-management training and am in no way an expert, here’s what I’ve learned in my first six years as a librarian:

Start conversations early and have them often.
Decisions do not go well when they’re dropped on people out of the blue. For example, we know that there’s a crisis in scholarly publishing – the fees and annual increases vendors charge are unsustainable, and the CAD/USD conversion rate isn’t in our favour. Faculty need to hear that message now, not in a few years when a major journal package has to be cut.  If you talk to communities early, you’re able to hear concerns that may figure into future decision making, and figure out strategies to meet needs in other ways.

Don’t force an idea if you don’t have to.
One of the things my first boss told me was that sometimes you’ll have a great idea, but conditions just aren’t right for uptake.  Now 6 years into my career, I can see what he meant.  In my first year, cutting a certain database was unthinkable to a particular department, even though there was plenty of evidence that this cut should be made.  Four years later, I was able to get the department to agree to the cancellation with little issue.  Sometimes an issue has to be dealt with, but other times you can wait.  Plant a seed, walk away, and come back to it another time if you can.

Be proactive.
When Memorial University of Newfoundland (MUN) announced that thousands of journals were being reviewed for potential cost-savings and CBC reported on it, Ryerson University Library & Archives did something outstanding: they published a proactive blog post that was upfront with their community. There was no beating around the bush, with the first Q&A being: “Can we anticipate similar activities at Ryerson? Yes we can.”  Ryerson didn’t have to do anything – the controversy wasn’t at their institution.  But they showed excellent leadership in addressing the issue head-on and set a great example for other libraries to follow.

Have a plan.
A former boss of mine brought the idea of project charters to our work culture. These documents clearly outline goals, deliverables, in/out of scope areas, rationale, stakeholder responsibilities, scheduling considerations, timelines, risks & mitigation strategies, and sustainability. The most recent project charter around my library is related to the evolution of our print collection, which we’re looking to reduce over the next 5 years (evidence-based, of course – shout out to the COPPUL Shared Print Archive Network (SPAN) Project and GreenGlass). Having a project charter allows all librarians and staff to consider the direction of the project, offer their input, and get on board with it. And this last point is key – if employees don’t feel consulted and don’t buy in, it’ll be especially hard for them to support that decision externally.

I’ll also highlight the “risks & mitigation strategy” section of project charters. This is where risks are identified.  For example, if the decision you’re undertaking is related to reducing the number of journals or print books, you’re probably going to attract negative media attention and dissatisfaction among print-book lovers. This shouldn’t come as a surprise, so plan for it.

Show due diligence and evidence.
I’ve mentioned above that it’s important to give your community plenty of opportunities to talk to librarians about large decisions and to get assistance with mitigation strategies (if necessary). But it’s also important to keep a record of how you did this. I once had an irate student group demand a meeting about a database that was cut over a year prior. When I laid out the process (multiple meetings and emails) and the evidence (rubric numbers, faculty input) used to make the decision, as well as how the library devised alternatives for accessing the cut database’s content, the students were completely satisfied and even apologized for assuming that the library was in the wrong.

At my library, we use a rubric to assess all our databases. Aside from helping us make decisions, this rubric enables us to “show our work” to faculty. When they see that we look at 28 different categories for every database, they’re more inclined to trust our decision-making than if we showed up with limited evidence and told them we need to cancel a database.
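
The post does not describe the rubric’s categories or how the 28 ratings are combined, so the sketch below is purely hypothetical – invented category names and weights – but it illustrates how a weighted rubric can turn per-category ratings into a single, comparable score that is easy to “show” to faculty.

# Hypothetical database-assessment rubric; the real rubric has 28 categories,
# and the categories, weights, and ratings here are invented for illustration.
RUBRIC_WEIGHTS = {
    "cost_per_use": 3.0,
    "usage_trend": 2.0,
    "curriculum_fit": 2.5,
    "faculty_feedback": 1.5,
    "overlap_with_other_resources": 1.0,
}

def rubric_score(ratings):
    """Combine per-category ratings (1-5) into a single weighted score out of 5."""
    total_weight = sum(RUBRIC_WEIGHTS.values())
    weighted = sum(RUBRIC_WEIGHTS[category] * ratings[category] for category in RUBRIC_WEIGHTS)
    return weighted / total_weight

database_a = {"cost_per_use": 2, "usage_trend": 2, "curriculum_fit": 3,
              "faculty_feedback": 4, "overlap_with_other_resources": 1}
print(f"Database A rubric score: {rubric_score(database_a):.2f} / 5")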

Do some anger aikido (if appropriate).
When our library started a consultation with several faculties on cutting a particular database, some of our faculty were understandably upset. But because of prior conversations about the problems in scholarly communications, instead of turning that anger on librarians, they directed it at admin for not funding the library better, and at a parasitic scholarly publishing industry. When the latter came up, it gave librarians an opening to talk about the importance of Open Access, and helped convince some faculty to submit their work to our institutional repository to ensure it would no longer be behind a paywall.

I’m not saying that you shouldn’t take the blame if it’s legit your fault.  If you messed up, own it and don’t throw your colleagues under the bus. But when librarians are doing the best they can – being proactive, using evidence, starting conversations early – and factors beyond our control are the source of anger, I think it’s acceptable to do a little aikido and redirect the anger toward the source (especially if change can be made there).

People may not like a decision but they will usually respect your position if you’ve shown yourself worthy of respect.
Difficult decisions are easier to respect when they’re argued for by trustworthy, respectful, diligent people who have a track record of working on behalf of the community instead of for personal gain.  Be one of those people. As one of my favourite sayings goes, “We’re all smart in academia. Distinguish yourself by being kind.”

Consider the Spectrum of Allies.
I was first introduced to the Spectrum of Allies in the Next Up social justice leadership program, but it’s applicable to any controversial subject. The idea is that you’re not going to radically shift people from opposing an idea to loving it, so it’s better to think of nudging people from one position to the next.

Applying this concept to the decision to cancel a database might look something like this: active allies are those who will advocate for the decision; passive allies are those who agree with the decision but don’t speak up; neutrals are those who don’t care either way; passive opposition are those who oppose the decision but don’t speak up; and active opposition are the folks who are outspoken critics of the decision.  The goal then becomes shifting each group one position over.  So you may not be able to convince Professor Y that they should support the decision to cancel the database, but you might convince them to move from active opposition to passive, thereby not running to the press.

Accept that some people will be unreasonable.
Some people are just jerks. There will be nothing you can do to satisfy them, and there’s no point in dumping endless energy into convincing them of a decision.  But you can try to neutralize their influence. If one particular member of a department is being an unreasonable PITA (Pain In The Ass), make sure you’re talking to their departmental colleagues so that those folks aren’t swayed by the individual’s influence.

Build an evidence-based culture.
If your library becomes known as a department that does assessment well and can provide valid evidence, you’ll garner respect on campus which can only make life easier.

Study the people around you who are good at conflict.
Tap into the wisdom of your diplomatic colleagues.  I’m lucky enough to have one who is United Nations-level good at diplomacy, and so when I found myself in a situation where my natural un-diplomatic impulses were about to take over, I’d ask myself “What would (name) do?”  After several years of this practice, I now cut out the middle step and just act tactfully in the first place most of the time (“fake it ‘til you make it” really does work!) But I’d have never been able to get to this place without watching and learning strategies from my co-worker.

At the end of it all, reflect on how it went and what you can do better next time.
Big decisions, like canceling journals or doing significant weeding, are difficult and hard to roll out perfectly. It’s important to reflect on what’s gone well in the process and what can be improved upon for next time.

What’s your experience with rolling out controversial decisions?  Is there something that should be added or subtracted from this list?  Let me know your thoughts in the comments.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

(Small) public libraries do research too!

By Meghan O’Leary, MLIS, Collections and Reader’s Advisory Librarian, John M. Cuelenaere Public Library

Last October I attended the Centre for Evidence Based Library and Information Practice Fall Symposium and quickly came to the realization that I was the only public librarian in attendance; the year before, there were only two of us. Almost all the presentations were geared towards special or academic libraries, which got me thinking, “Hey! Public librarians do this kind of research too!”

Of course, public libraries do research! Admittedly, research in the LIS discipline is dominated by academic librarians. Even research about public libraries tends to be done mostly by academic librarians. Why is that? Public librarians do not need to publish in the same way that academic librarians need to, but why don’t we publish more research? Do we not have the time or funding? Do we not consider what we do as research worth publishing? These are important questions, but not what I want to discuss today.

What I do want to talk about is what small public libraries, specifically the one I work at, do as far as research is concerned. But, first, some background information. I live in Prince Albert, Saskatchewan and work as the Collections and Reader’s Advisory Librarian at John M. Cuelenaere Public Library. Prince Albert has a population of roughly 40,000 people, and the library has one full branch and one satellite branch on the west side of the city. Compared to Saskatoon, Regina, Edmonton, Calgary, etc. we are a rather small library.

Small public libraries, like mine, do engage in research. However, the research we do is generally not seen as “traditional” research because data collection is usually an ongoing process and we often do not share it with the LIS community. Matthews (2013) offers a model of “Try, Assess, and Reflect” for public libraries embracing evidence-based librarianship and says, “try something, gather some data about the effectiveness of the change, and then make some adjustments” (p. 28). Here’s an example of how we used this model: A couple of years ago we looked at what other libraries were doing and made the decision to launch a small video game collection. After a few months, I gathered statistical information about the new collection. Based on that we tweaked how we were doing things. Some of the items were not being returned, so we limited checkouts to two games per patron. E-rated games were being used more than M-rated games, therefore I altered my buying habits accordingly. Each month I gather statistical data on the whole collection to see what is being used, what is not being used, and what current trends are.
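
As a rough sketch of that monthly “gather and assess” step, the Python below totals checkouts by game rating from an exported circulation file. The file layout (rating and checkouts columns) and the file name are my own assumptions – most ILS systems can export something similar, but the details will differ.

import csv
from collections import defaultdict

def checkouts_by_rating(path):
    """Total the month's checkouts for each game rating (E, T, M, etc.)."""
    totals = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # expects 'rating' and 'checkouts' columns
            totals[row["rating"]] += int(row["checkouts"])
    return dict(totals)

# Hypothetical export file name; substitute whatever your ILS produces.
totals = checkouts_by_rating("video_game_circulation_march.csv")
for rating, count in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{rating}-rated games: {count} checkouts")
# If E-rated titles consistently outpace M-rated ones, shift buying accordingly.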

That is an example of how small public libraries use quantitative research methods to guide change; however, there has been a shift in research trends in the LIS community from quantitative to qualitative methodologies. Another project I want to talk about is our most recent strategic planning project. It has been ongoing for a few months now and we have done various types of information gathering. We used statistical data like gate counts, usage stats, website metrics, etc. to guide us in creating a new strategic plan, but we also held three separate strategic planning sessions where we gathered qualitative data. Our first session was with the members of our board and library management, the second was with the rest of the library staff, and finally, the third session was held with the public. The major topics up for discussion were Facilities, Technology, Collections, Programs, and Community Outreach. The topics were written on large pieces of paper posted around the room, then everyone who attended the session was given a marker (and a cookie, because you have to lure them in somehow) and asked to go around the room and write their ideas under each heading. Each session built on the previous one. We analyzed the information gathered and have started developing a work plan which will target each of the major points. The information gathered has already helped us with the designs for our renovation project, as well as with our budget allocations.

I could write more about the various types of research small public libraries, such as John M. Cuelenaere Public Library, do, but I do not want to turn this blog post into an essay! If there are any Brain-Work blog readers out there who are also from public libraries and conduct other forms of research, please comment! I would love to hear what other public libraries (large or small) are doing.

Resources

Matthews, J. R. (2013). Research-based planning for public libraries: Increasing relevance in the digital age. Santa Barbara, CA: Libraries Unlimited.


This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Is It Possible To Develop An Evidence-Based Complement Plan?

by Frank Winter
Librarian Emeritus, University of Saskatchewan

Although not typically phrased as such, librarian labour – what it is, how much of it a library has, how best to deploy it – underlies the ongoing discussion of how best to deliver the services needed by the host institution. Opinions abound on what is needed and what is desirable. Proposals for new or modified roles and services are often received as something that can only be achieved using incremental resources rather than by internal reallocation, a stance based on the oft-voiced assertion that each member of the current complement is already more than fully occupied with existing responsibilities.

The common unit of analysis in these discussions tends to be the position held by an individual librarian, considered as an indivisible worker. But any position and its associated responsibilities are made up of a bundle of tasks. Considering workload and complement in a position-based sense can forestall systematic discussion about options for assigning or reassigning tasks up, down, sideways, or out (in terms of partnerships and collaborations, and/or outsourcing, and/or assigning to a different group of employees in the library or elsewhere on campus), redefining the task, utilizing technological options, or not doing the task at all. These are all part of the standard toolkit when dealing with short term situations such as leaves and vacancies, and perhaps longer term situations such as downsizing, but they do not typically figure in discussions of longer term complement plans.

Complement plans are assembled from many component parts. Although there is typically a great deal of professional judgment that goes into complement planning, it is often individual, implicit, and fraught with the usual power dynamics of any group process and all the other pitfalls of planning and decision-making.

Is it possible to employ the processes and tools of evidence-based library and information practice (EBLIP) to develop a complement plan that would address some of these challenges and produce a robust planning document? A quick review of the relevant evidence-based literature suggests that such an approach has not yet been reported but might be productive.

What would such a process look like using the 5 As (Articulate, Assemble, Assess, Action, Adapt and the interactive process of their use) outlined by Denise Koufogiannakis (2013) together with her description of what types of evidence are typically considered in library research as well as the “institutional, group-driven decision making” framework typical of library organizations? Constraints of space make a full discussion of each A impracticable but a quick sketch might be helpful as a starting point.

* Articulate

Koufogiannakis sketches out several points but it is important to recognize that a complement plan addresses the allocation of one subset of a library’s resources – librarian labour. As with every proposed resource allocation it is a political document incorporating budget choices that reflect values.

* Assemble

There is a wealth of potentially relevant evidence. Many of the sources would typically be included in the Environmental Scan section of any Strategic Plan. What EBLIP provides is clarity of purpose in the Articulation stage and focus in assembling evidence at this stage. If the assembled evidence does not, at the Assessment stage, reveal enough about the librarian labour involved, then the evidence-based approach requires an iteration of this stage.

* Assess

Assessing the evidence is the next step in EBLIP. The standard criteria of credibility and validity apply as well as issues of relevance and context. Ensuring that at the Assemble step there is as much depth, breadth, and context as possible in the assembled evidence will aid in assessment. Transparency and inclusivity during the discussions are also important elements at this stage.

For example, although evidence from comparator libraries is often considered, it is actually quite tricky to find true comparators. It is important to be aware of similarities and differences, which specific tasks and responsibilities are and are not included, and the extent to which they might be distributed among others in the library and on campus. It is not particularly helpful to assume what any library or librarian is doing based on what is described on home pages or in position titles. The arbitrariness of organizational structure on campus and within libraries sometimes makes it challenging to map apples to apples. At a minimum, personal contact should be made to ensure that the full situation is known. On the other hand, if a comparator library with approximately the same complement of librarians and roughly the same organizational mission is responsible for services not supported by the local library, then further investigation is needed to discover how that other library distributes the responsibilities among its librarian complement. If a smaller university library delivers the same or even an expanded array of librarian-related services then that, too, merits further investigation and perhaps further iteration of the Assemble stage.

It is necessary to assess the potential impact of the evidence on “the Library” and the librarians. Impacts range from measurable and substantial through to insubstantial and unmeasurable.

Evidence from existing librarians must be weighed to distinguish anecdotal empiricism and self-interest from credible evidence.

Another step to take at this point is to be clear about the appropriate unit of analysis when assessing evidence. It is not helpful to view “The Library” – either local or comparator – as an undifferentiated lump. It is more appropriate to disaggregate “The Library” into a bundle of things (work groups including librarians, physical locations, and so on) responding to differing user needs. This step will help in the assessment of what will work, what won’t, and why. What might work in one area of a library might not be appropriate in another. This avoids the trap of trying to find one size that fits all.

* Action

Getting to agreement is obviously another critical step in the development of a complement plan. Koufogiannakis describes a number of criteria but it is her articulation of the outcome of this step that is important: Determine a course of action and begin implementation of the decision. If no action results from the work above (and acknowledging that a considered conclusion that no changes are desirable is a possible outcome), then arguably the process has been pointless.

In this respect, it is interesting to read the recent blog posting by Roger Schonfeld entitled Shaping a Library by Linking Planning and Budgeting, and the associated comments (2016). Even for the largest libraries, librarian complement is typically a slowly evolving resource if viewed as being composed of positions. Alternatively, for smaller academic libraries changing just one position can be a major and rare action in the overall composition of the complement. The Schonfeld posting highlights librarian time – a more fungible resource than positions – as the productive unit of analysis.

* Adapt

Have the goals and outcomes of the process resulted in what was anticipated to be their effect – the allocation of librarian labour to most effectively meet the current and emerging information needs of library users? If not, why not? At least one possible outcome at this stage (very much institution-dependent) is a conclusion that there is a diminished need for librarian labour. If this is the case, it makes for a pretty gloomy complement plan going forward. And so, the planning cycle returns to the Articulation stage.

In conclusion, the 5 As of EBLIP in addition to the collegial decision-making style typical of libraries seem quite suitable to the development of useful librarian complement plans.

References

Koufogiannakis, D. (2013). EBLIP7 Keynote: What we talk about when we talk about evidence. Evidence Based Library and Information Practice, 8(4), 6-17. doi: http://dx.doi.org/10.18438/B8659R

Schonfeld, R. (2016, November 7). Shaping a library by linking planning and budgeting [Blog post]. Retrieved from http://www.sr.ithaka.org/blog/shaping-a-library-by-linking-planning-and-budgeting/

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

A small experiment to improve Facebook engagement

By Joanna Hare
Run Run Shaw Library, City University of Hong Kong

As I am sure is the case at many academic libraries, I am the sole person responsible for maintaining the Library Facebook Page. This means that a lot of my time is spent planning and scheduling content, with not as much time as I would like spent collecting evidence for the purpose of improving content. I regularly check and download Facebook Insights reports to keep an eye on how our page is doing, and of course I always pay attention to how much interaction a particular post is getting through comments, likes, or shares. Recently, however, I trialed a small experiment to see if I could improve the performance of a particular type of post: a weekly link to the Library’s Books of the Week blog.

Books of the Week is a place to share recommended Library books, usually related to a current event such as the Olympics or the beginning of semester. In the past, a feed was created so all new blog posts would be automatically posted to the Facebook page. This was causing a number of problems, such as the timing and number of posts becoming unpredictable, and the posts being poorly formatted in Facebook. Most importantly, the Facebook posts coming automatically from the blog were getting zero engagement, and the Reach of the posts was very low. A change was clearly needed.

I decided to stop the blog from posting automatically to Facebook and to post the item manually myself. I created a simple graphic to be used each week, and posting manually meant I could write the accompanying status to be more timely and unique. Even though manually posting the item each week only takes a few minutes, in terms of my job description and job performance I knew I would need to justify whether this increased manual work was worth the effort.

Based on an experiment described in this article, I started a log of the variables involved in posting Books of the Week each week. The log included a link to the post, a description of the post such as the image dimensions and the length of the accompanying status, and the time and date of the post. Then, each week I recorded the basic units of measurement for the post provided by Facebook: the Reach and the Post Clicks. I was less interested in likes, comments, and shares in this instance (though of course I kept a record of them in my log), because metrics like Reach and Post Clicks are sufficient to see whether people are engaging with your content even if they don’t take the extra step to ‘like’ a post: “…just because someone didn’t click on your specific post, if that post encouraged them to click anywhere else on your page, you’ve done a good job!” (Cohen, 2014)
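
For anyone wanting to keep a similar log, here is a minimal Python sketch of one way to structure it as a CSV and then compare average Reach by post type. The column names and file layout are my own invention – the post does not describe the actual spreadsheet – and the figures would come from copying Reach and Post Clicks out of Facebook Insights each week.

import csv
import os
from collections import defaultdict
from statistics import mean

# Hypothetical column layout for the weekly posting log.
LOG_FIELDS = ["date", "post_url", "post_type", "image_size", "status_length",
              "reach", "post_clicks", "likes", "comments", "shares"]

def log_post(path, entry):
    """Append one week's post details to the CSV log, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

def average_reach_by_type(path):
    """Compare how 'Link' and 'Photo' posts perform on average."""
    reach = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            reach[row["post_type"]].append(int(row["reach"]))
    return {post_type: mean(values) for post_type, values in reach.items()}

Logging each post with log_post() and then calling average_reach_by_type() would surface the kind of ‘Link’ versus ‘Photo’ difference described below.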

For the first four weeks, I saw a marked improvement in Reach, rising from 43 in the first week to 185 by the fourth week. At this point, I tweaked the method of posting. Rather than posting a link and then adding the graphic as an attachment, I posted the graphic as a photo, with an HTML link in the description. Crucially, after digging into my Insights reports I found Facebook categorises the first type of post as a ‘Link’ and the second type as a ‘Photo’. The difference is very small in practice, and looks like this:

Fig 1: The image on the left shows a ‘Link’ post type; the image on the right shows a ‘Photo’ post type.

After making this change, the increase in the post’s Reach was remarkable – the figure jumped to over 500. Over the next 6 weeks I continued this method of posting, and the posts consistently reached over 800 users. Once during the six-week period I reverted to the first method, and the Reach dropped to 166. I returned to the second method and the Reach increased again; it has remained at or above 800 since I stopped keeping a weekly record of the variations.

Much of the literature and the marketing material about using Facebook recommends that page managers use images to engage their audience, so I suppose these results are not surprising. I did not however expect there to be such a difference in Reach simply because my post originated as a ‘Photo’ rather than a ‘Link’, when the content is essentially the same.

The general visibility of the posts was much improved with this method, but the change in the actual click through rate to the blog was less dramatic. On average around 5 people each week clicked on the post. My Insight reports show 2-3 of the clicks were to expand the image or description, while on average 0-1 people clicked the link to visit the blog. Quite disappointing!

Despite this, I do not think the exercise was in vain. Firstly, seeing for myself that images truly do have a greater Reach according to Facebook’s algorithm is useful for all future posting practices. Secondly, I think it is valuable to have our posts become more visible on Facebook, increasing our presence on the platform in general. It seems that the manual effort (which is really only around 10-15 minutes each week – especially now that my colleagues assist in drafting the text and modifying the image!) is worthwhile given the marked increase in the posts’ Reach, and the small increase in the number of people clicking on the post. This is just a small-scale way of using Facebook Insights, and in future I hope to use Insights more strategically in designing and delivering the Library’s Facebook content. In the coming weeks I will be experimenting with a more coordinated approach to Facebook, including a paid advertising campaign, and I look forward to sharing some of the results with the C-EBLIP community.

References:

Busche, L. (2016, February 20). 10 Marketing Experiments You Can Run Yourself to Improve Your Reach on Social Media. Retrieved September 27, 2016, from https://designschool.canva.com/blog/marketing-experiments/

Cohen, D. (2014, August 6). Post Clicks, Other Clicks Are Important Metrics for Facebook Page Admins, Too. Retrieved September 27, 2016, from http://www.adweek.com/socialtimes/post-clicks-other-clicks-are-important-metrics-for-facebook-page-admins-too/300388

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Personality types and the 360° survey for professional development, Or “Being a Bulldozer and Still Winning Hearts”

by Tegan Darnell
Research Librarian
University of Southern Queensland, Australia

This short article is about how evidence-based practice applies on a personal/individual level, and how I’m using the outcomes of survey tools for reflective professional development.

As part of an ongoing leadership development program, I have completed a Myers-Briggs Type Indicator® (MBTI), and the slightly less well known 360° Life Styles Inventory™ (LSI). Both are evidence-based and meet rigorous academic and psychometric standards.

Although reluctant to be categorised, I have committed to ‘develop’ my practice and become a better and more effective leader. I endeavour to take what I have learned from this and use it in my practice.

The MBTI® told me I am an ENTJ type, or ‘the commander’, closely correlated with the ‘Fieldmarshal’ in the Keirsey Temperament Sorter (KTS). An ENTJ can be dictatorial, abrasive, and insensitive. Notable ENTJ types include Margaret Thatcher, Vladimir Putin, Steve Jobs, and Gordon Ramsay (the British chef who swears at people a lot).

It isn’t all bad… Basically, an ENTJ is a natural born leader with a ‘forceful’ personality. Also, Intuition (N) types have been shown to have significantly higher ego development (Vincent, Ward, & Denson 2013) – apparently that is a good thing.

Last time I took the same test I came out as an INFJ, or ‘the advocate’, (the ‘counsellor’ according to the KTS) so my new result was somewhat of a shock. As a committed researcher-practitioner, however, I have to accept what the data is telling me. Quite a lot of time has passed since I last took the full questionnaire…

However, it was the 360° survey that was the most revealing.

In a 360° survey, not only do you complete a survey about your behaviours, but so do your boss, your direct reports, and your peers and colleagues. The differences between your self-evaluation and the perceptions others have of you are revealing.

I am a human bulldozer.

My colleagues rated me as unhealthily competitive, approval seeking, and lacking in actual performance. Apparently I disregard others’ feelings, and come across as cold and insensitive. On the positive side, my colleagues see me as independent and unconventional, and I am nowhere near as avoidant as I thought I was.

Sometimes, the evidence does not say what you want it to say. When the data is about a beloved Library service or resource this is hard to take. When the data is about your personal performance and behaviour, this can be particularly difficult to reconcile. But, as with all research, I have asked the question, I have collected the data, and I have the results. Now, with this information, I need to make a decision about what action I take.

An ENTJ appreciates and welcomes objective and rational statements about what they do well and what could be done better. Criticisms, to me, mean: “Here is your challenge. Do whatever is in your power to make it happen”. So, I have accepted this challenge.

Being a bulldozer was never my intention, but if I am a bulldozer, I’ll be a bulldozer with friends, thank you. I’ll be working on my ‘encouraging’ and ‘humanistic’ behaviours, and doing lots of open communication (i.e. ‘listening’) over the next few weeks.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Member-Driven Decision Making

by Gwen Schmidt
Manager, Branches, Saskatoon Public Library

How many librarians does it take to change a lightbulb? If you can believe it, there are at least four different answers to this joke. My favourite is “Only one. But first you have to have a committee meeting.”

I have just finished a two-year term as the President of the Saskatchewan Library Association (SLA), following a period of previous involvement on the Board. It has been a fascinating time of change in our organization’s history, with a reshaping of our direction, our governance, and our style of decision making. I was glad to be a part of it.

One of the most interesting things about this time of change was the renewed SLA commitment to a member-driven philosophy. In 2010, the SLA membership determined that our Board structure needed an overhaul, and a Board Governance Task Force was struck. The Task Force took a look at our history, our values, our goals, and the challenges ahead, and set us on a new path with a new vision – central to which was the idea that our members would lead us.

We were always trying to be member-driven, but this renewed commitment to that idea came at a time when cheap/free consultative software tools exist in abundance, and when social media has given individuals the expectation that they can have their say easily. It was easier than ever before to make ‘member-driven decision making’ a reality.

My presidency fell during a time when strategic planning needed to be done. Instead of just doing it at the Board level, we did a broad survey of the members to find out what they think SLA should be doing, what they need from their library association, and what they think we do well. Once we had that data, we also took the opportunity at the annual conference to have conversations with people in person about it. We took broad themes out of the survey data, and had an in-person membership consultation where people could expand on those themes. All of that consultation helped us to build a robust strategic plan that is taking us forward.

During the same time, provincial, territorial, and national library associations across Canada were considering building a new national federation together, which ultimately became the Canadian Federation of Library Associations (CFLA). SLA was invited to participate. Our member-driven philosophy set a road-map for us: before committing to participation, we took the question to our members. Did they want us to go forward as part of that federation? If yes, within what parameters? Our members gave us a resounding mandate, and endorsed a set of parameters going forward. Consultation with them throughout the process of building the CFLA identified a problem to be solved around a shared Saskatchewan-Manitoba CFLA Prairie Provinces Representative position. Knowing what our members wanted allowed us to set up a Saskatchewan-Manitoba working group to determine the structure of the Prairie Provinces rep and to ensure strong communication and representation.

In associations, ‘member-driven decision-making’ sounds a little – or a lot – like evidence-based decision making. Instead of doing what we think members want us to do, we ask them what they want us to do and then do that. Those collaborative conversations take time, but ultimately build trust and energy, and give better results in the end.

How many member perspectives does it take to make an association truly shine? A heckuvalotta them. But that makes the future bright.

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

Evidence Versus Intuition (Which is Really, de facto, Evidence)

by Gwen Schmidt
Outreach Coordinator, Saskatoon Public Library

I have never been a really great researcher. When I was in library school, our Research Methods class brought me to tears more often than I would have liked. Surveys and statistics and research papers, bah humbug.

What I am good at is patterns. Patterns in nature, patterns in process, patterns in human behaviour. A really intricate visual pattern will actually make the hair stand up on the back of my neck. I will be entranced. I have always been this way.

Lots of librarians find their way to this career because they love books. Don’t get me wrong; I read so many books as a kid that the library was my second home. I still read a lot of books. But what attracted me to the library most is the patterns. Call numbers. Classification schemes. Interlibrary loan processes.

In my 20 years as a professional, I have become a person who “is good at deliverables”, as my last Manager would say. I can build a process that is lean, sensible, efficient, and understandable. I have also become a connoisseur of human behaviour. I enjoy watching the patterns, and I can get a lot done by anticipating how people will behave in certain contexts.

So, when someone says the phrase ‘evidence-based library and information practice’ to me, two things happen: first, I get anxious and hyperventilate about research papers, and surveys, and statistics, and then I stop myself and start to wonder if ‘evidence’ means different things to different people.

I would like to posit that intuition is as important as evidence in decision-making, and that intuition is, in fact, a type of evidence. If you pay close attention every day to the work that you do, your brain starts to see patterns in workflow, in policy interpretation, and in how humans interact with your work. This is the ‘ten thousand hours’ of attention or practice that Malcolm Gladwell talks about in his book, Outliers – the attention and experience that make people really good at something.

Some libraries live by a self-imposed rule that all of their decisions need to be evidence-based, and this often means an environmental scan nation-wide, reading research papers, doing surveys, crunching statistics, and writing reports, all before that decision is made. I would suggest that sometimes there is not enough time to do all of this, and then intuition and years of paying attention need to come to the fore. Neither one is always a better approach, but both approaches need to be in your toolbox.

This is why you might do a bunch of quality formal research before you build a proposal, but you also need to run it past the people down on the ground who work with the processes every day. They can tell you whether or not your proposal is grounded in reality, and whether it will fly or not. They live and breathe where those processes will play out.

Do you need examples to know what I mean? Let’s get granular. At the public library, I have created a lot of programs that resonate with people, and a lot of these I developed using my gut instincts.

I have been programming for years, and, let me say, there have been a lot of duds. Every well-attended or poorly-attended program is a learning opportunity, though, I always say. An opportunity to pay attention. Why did it work? Why didn’t it work? Why do other librarians’ programs work? What are the goals I am trying to accomplish in the first place, and how did this program accomplish those goals or not? What did library patrons say they wanted for programs, but also what programs did they actually show up for? What little things annoy people? Make no mistake: the intuitive approach needs to be fairly rigorous if it is going to work.

If people come to a program, I call that ‘voting with their feet’. After a few years of paying close attention to human behaviour related to programming, and also paying close attention to the things that annoy all of us, the patterns started to emerge for me. Here’s what I know.

Teens are way more engaged in a program if you give them lots of responsibility and make them do all the work. This sounds kind of unbelievable, but it’s true. They do not need us to deliver them fully-formed content to enjoy passively – they can get that from TV or the Internet, and it will always be better than anything we can do. What they need is a challenge or an invitation to create. Since we started to program for teens on this concept, my library has had amazing success with the “Teen Poetry Celebration” (teens write poems), the “We Dare You Teen Summer Challenge” (a literacy scavenger hunt and activity challenge), “Teen Advisory Councils” (teen library club), and most recently the “Book Trailer Contest” (teens make video trailers for books). We get good attendance numbers, and the teens build amazing things.

Other groups of people have patterns too. Most people are too busy to get to a program on a particular date, but they will start to trust you if the program happens repeatedly in a predictable fashion and they don’t have to register. I used to run one-off programs, and sometimes people would come and sometimes they would not. At the same time, a weekly drop-in armchair travel program and weekly drop-in children’s storytimes across the system would attract 20-90 people each time. Why wouldn’t I set up important programs in a weekly, drop-in (no registration hurdles) format? So that’s what we did. We built a weekly drop-in program called “BabyTalk”. Weekly drop-in works for moms and babies, because there is no stress if they miss it, and they can choose to attend at the last minute. I currently run a weekly drop-in program called “iPad Drop-In”, for seniors. The seniors tend to come over and over again, and start to get to know each other. They will also let us teach them things that they would never come to a one-off to learn (e.g. How to Search the Library Catalogue). We get about sixteen people each week with very little effort. It is lean, sensible, efficient, and understandable. The only other thing we need to do is to make sure that we deliver a great program.

These are only a few of the intuitive rules that I live by in my job. Intuition based on watching seniors vote with their feet, watching moms and babies make it to the class or not, and watching teens participate or not participate.

With current developments in neuroplasticity research and the explosion in social media use, there are a ton of popular psychology books out about paying attention, mental focusing, and intuitive decision-making. So, is intuitive decision making a form of evidence-based librarianship? I think so, based on all the patterns I’ve seen.

(I am currently reading “Focus: The Hidden Driver of Excellence” by Daniel Goleman.)

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

EBLIP and Public Librarians: A call to action!

by Pam Ryan
Director, Collections & Technology at Edmonton Public Library
pryan@epl.ca / Twitter: @pamryan

As a former academic librarian, I’m often asked what the biggest differences are between public and academic libraries and librarianship. My short answer is usually something about having only worked for one (each excellent and probably non-standard) example of each so it’s difficult to know if the differences I’ve experienced are more organizational or sectoral. However, an increasingly concerning difference is the relationship that public librarians have with the research and evidence base of our profession.

Low public librarian participation in research and publication is not a new phenomenon nor is the small overall percentage of LIS research articles about public library practice. Research in 2005 showed that over a four year period just 3% of article authors in North American LIS journals were employed in public libraries. Even in Public Library Quarterly, only 14% of the authors were public librarians. An earlier study in 2001 showed that only 7% of LIS research articles were public library oriented [i].

The recommendations in the 2014 Royal Society of Canada Expert Panel report on Canada’s libraries call for increased sharing of research and statistics to support evidence-based practice in public libraries. The recommendations specifically include a call to action for public libraries to make their work visible by posting evidence-based studies on library websites for the benefit of the entire library community, in addition to continuing to share statistical data freely with CULC and other organizations [ii].

These recommendations follow from the fact that public libraries are increasingly called upon to show their value and prove their impact, yet we are not actively in charge of telling our own story by sharing our organizational practice findings or enlisting our librarians to share their work outside of internal operational functions. We need to heed this call to action both as organizations and as individual professionals. I am keenly aware of all the good program evaluation and assessment work that goes on in public libraries to inform services and innovation, yet too frequently it is not taken the step further – to openly available publication – to build our evidence base, inform our collective practice, and be available to tell our stories.

Of particular note in this call to action is the emphasis on openly and freely posting this work of our public libraries and librarians. A very distinct and frustrating difference between academic and public librarianship is access to the literature behind paywalls. I am well aware of how frequently I beg academic colleagues to share PDFs of articles and also, embarrassingly, how much less frequently I dip into the literature because access to it isn’t as seamless as it was when I was an academic librarian. Open Access publishing of our own literature needs a much higher profile than it currently has and is something our entire sector needs to work on.

Where to start? As an example, Edmonton Public Library (EPL) recognizes that research and its dissemination are integral to being innovative. EPL provides two recent librarian graduates from the University of Alberta’s School of Library and Information Studies with one-year research internships. These new professional librarians conduct research that is invaluable to EPL’s future planning. Recent assignments on digital public spaces and open data; digital discovery and access; 21st century library spaces; and analyzing the nature and types of questions received at service desks have also included the expectation of openly sharing internal reports [iii] via the EPL website, as well as publication in Open Access forums [iv][v][vi][vii]. Librarians working on innovative projects are also encouraged to share their practice and findings openly [viii][ix]. Providing the encouragement, support, time, and expectation that sharing be an integral part of public librarian practice is something all libraries can foster. We need to collectively take responsibility for changing public library culture and take ownership of telling our own stories and sharing our evidence.
________________________________________________________________
[i] Ryan, Pam. 2012. EBLIP and Public Libraries. Evidence Based Library and Information Practice. Vol 7:1. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/16557/13672

[ii] Demers, Patricia (chair), Guylaine Beaudry, Pamela Bjornson, Michael Carroll, Carol Couture, Charlotte Gray, Judith Hare, Ernie Ingles, Eric Ketelaar, Gerald McMaster, Ken Roberts. (2014). Expert Panel Report on The Future Now: Canada’s Libraries, Archives, and Public Memory. Royal Society of Canada, Ottawa, ON. Pg. 120. https://rsc-src.ca/sites/default/files/pdf/L%26A_Report_EN_FINAL_Web.pdf

[iii] Publications. Edmonton Public Library. http://www.epl.ca/about-epl/news/publications

[iv] Arnason, Holly Kristin and Louise Reimer. 2012. Analyzing Public Library Service Interactions to Improve Public Library Customer Service and Technology Systems. Evidence Based Library and Information Practice. Vol 7:1. https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/11654

[v] Wortman, Beth. 2012. What Are They Doing and What Do They Want: The Library Spaces Customer Survey at Edmonton Public Library. Partnership: The Canadian Journal of Library and Information Practice and Research. Vol 7:2. https://journal.lib.uoguelph.ca/index.php/perj/article/view/1967/2633#.Vh1gAU3lu70

[vi] DaSilva, Allison. 2014. Enriching Discovery Layers: A Product Comparison of Content Enrichment Services Syndetic Solutions and Content Café 2. Partnership: The Canadian Journal of Library and Information Practice and Research. Vol 9:2. https://journal.lib.uoguelph.ca/index.php/perj/article/view/2816#.Vh1p4U3lu70

[vii] Carruthers, Alex. 2014. Open Data Day Hackathon 2014 at Edmonton Public Library. Partnership: The Canadian Journal of Library and Information Practice and Research. Vol 9:2. https://journal.lib.uoguelph.ca/index.php/perj/article/view/3121#.Vh1f3U3lu70

[viii] Haug, Carla. 2014. Here’s How We Did It: The Story of the EPL Makerspace. Feliciter. Vol 60:1. http://www.cla.ca/feliciter/2014/1/mobile/

[ix] Carruthers, Alex. 2015. Edmonton Public Library’s First Digital Public Space. The Library as Incubator Project. January 20, 2015. http://www.libraryasincubatorproject.org/?p=15914

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.

The appropriation of evidence based terminology by vendors

by Denise Koufogiannakis
University of Alberta Libraries

Over the past few years, I’ve noticed an increasing number of products being marketed to librarians as “evidence based” tools for improving our decision making. Vendors seem to be latching onto the growth and acceptance of evidence based practice within librarianship and are marketing their products as such. They want to appeal to those who see value in data as a driver for decision making.

I recently looked into this more formally (see my EBLIP8 presentation from July of this year) and found two different types of products being promoted as “evidence based”:

1. Data gathering tools for collections analysis – these products are aimed at both academic and public librarians, but there are different products for each. For public libraries, the products focus on information such as circulation and demographic data to aid with management of the collection and new acquisitions. Similar products being targeted to academic libraries focus on collections usage statistics for the purposes of making cancellation decisions, weeding, and showing return on investment. Examples include CollectionHQ for public libraries and Intota Assessment for academic libraries.

2. Evidence Based Acquisition approaches – aimed at academic librarians, “evidence based acquisition” (sometimes called usage-based acquisition) is a relatively new option being presented by publishers, similar to patron-driven or demand-driven approaches. In this model, a group of titles from a publisher (such as all the titles in a particular subject area) is enabled upon commitment from the library to spend an agreed upon amount of money. Following the agreed upon time period, the library chooses the titles it wishes to keep, based upon usage of those titles (for more detail see the overview included in the NISO Recommended Practice for Demand Driven Acquisition of Monographs); a rough sketch of the selection step follows below. Examples of this approach can be found with many of the major academic publishers including Elsevier, Cambridge, and SAGE.
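
To make the model concrete, here is a minimal Python sketch of that final selection step under the simplest possible assumption: rank titles by recorded usage and keep them until the committed spend is used up. The titles, prices, and usage figures are invented, and real EBA programmes differ in how usage is counted and how list prices are applied.

def choose_titles_to_keep(usage, prices, committed_spend):
    """Rank titles by recorded usage and select them until the committed spend is exhausted."""
    keep, spent = [], 0.0
    for title in sorted(usage, key=usage.get, reverse=True):
        if spent + prices[title] <= committed_spend:
            keep.append(title)
            spent += prices[title]
    return keep, spent

# Invented example data: usage counts and list prices for four titles.
usage = {"Title A": 412, "Title B": 187, "Title C": 35, "Title D": 9}
prices = {"Title A": 120.0, "Title B": 95.0, "Title C": 150.0, "Title D": 80.0}

kept, spent = choose_titles_to_keep(usage, prices, committed_spend=300.0)
print(f"Keep {kept}, spending ${spent:.2f} of the $300.00 commitment")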

The question I ask myself is whether these products are really evidence based. Can they deliver what they promise when they say that they will improve collection management, make librarians’ jobs easier, help with decision making, save time, and provide dependable, high quality service? I guess it is the evidence based, critical side of me that is doubtful.

EBLIP is a process that asks us to consider the whole of the evidence when making a decision. To try and determine what the best evidence is. To try and see a complete picture by bringing together different evidence sources when making a decision. EBLIP is an approach to practice that is considered and reflective. Conversely, these products are meant to convince us that because they are called evidence based they will magically take care of all this hard work for us!

None of this is to say that the products are bad. In fact, they seem to offer potentially useful ways of drawing together data for collections and acquisitions librarians to use, or a model for acquisition that may actually prove to be a good one for many libraries. In short, what I see in these products are individual pieces of evidence that may be useful to aid with decisions, but certainly will not be a complete answer.

What we should all consider is the appropriation of evidence based terminology. This appropriation probably means that the EBLIP movement has become sufficiently recognized as integral to librarianship that its terminology is now used to sell vendors’ products to librarians, using the discourse of the movement. Referring to a product as evidence based lends credibility to it. If accepted as evidence based, the product’s profile is raised in comparison to other products, which may then be regarded as not being evidence based, even though they may certainly be just as evidence based as the products being marketed as such. This use of the term has been accepted too easily, applied without question.

EBLIP as a way of approaching practice is far more complex than what these products can offer. If they hold some piece of information that helps you with the process, great! But don’t think your job ends there. Just like all products, the types of products I’ve described above need to be assessed and tested. To state the obvious, do not rely on the evidence based terminology used by the vendor. If a product does something that makes your work easier, then by all means use it. But no product will be a magic solution. Above all, let’s test these products and determine how evidence based they actually are. How much will they help us advance the goals and mission of our Library? Let’s make sure they live up to what they say they offer, and place whatever they do offer in the larger context of overall evidence based decision making within collections.

Let’s not rely on vendors to tell us what is evidence based – let’s figure it out ourselves. We need to do more testing, critically examine all these products, and document and share what we learn with one another. Here are a couple of examples that may help you with your own examination:
Buying by the bucket: A comparative study of e-book acquisitions strategies.
Evidence based acquisitions: Does the evidence support this hybrid model?

This article gives the views of the author(s) and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.