by DeDe Dawson @dededawson
Science Library, University of Saskatchewan
I often rail against the unsustainability and inequity of the current subscription journal publishing system. We have the technology, the money (if we disinvest from the current system), and the ingenuity to completely re-imagine this system (see Jon Tennant’s recent article – it is short and worth your time!). A new system could be entirely open, inclusive, and democratic: enabling anyone in the world to read and build upon the research. This has the potential to dramatically increase the speed of progress in research as well as its uptake and real-world impact. The return on investment for universities and research funders would be considerable (this is exactly why many funders are adopting open access policies).
So, why is it so hard to get to this ScholComm paradise?
It is a complex system, with many moving parts and vested interests, and getting to my idealistic future is also a huge collective action problem. But I think there's more going on that's holding us back…
Have you ever heard of the analytical technique called The 5 Whys? It is designed to get at the root cause of a problem. Basically, you just keep asking “why?” until you reach the underlying issue (this may take more or fewer than five whys, obviously!). Addressing the root of the problem is more effective than pouring loads of time and resources into fixing all the intermediate issues.
I’ve used The 5 Whys numerous times when I’m stewing over this dilemma of inertia in transitioning to a new model of scholarly publishing. I always arrive at the same conclusion. (Before reading on, why don’t you try this and see if you arrive where I always do?)
1st Why: Why is it so hard to transition to a new, more sustainable model of publishing?
Answer: Because the traditional subscription publishers are so powerful; they control so much!
2nd Why: Why are they so powerful?
Answer: Because many researchers insist on publishing in their journals.
3rd Why: Why do they insist on publishing in those journals?
Answer: Because they are addicted to the prestige titles and impact factors of those journals.
4th Why: Why are they addicted to these things?
Answer: Because they feel that their career depends on it.
5th Why: Why do they think that their careers depend on this?
Answer: Hiring & merit committees, tenure & promotion committees, and granting agencies often judge the quality of research based on the prestige (or impact factor) of the journal it is published in.
Of course there are many variations in how to ask and answer these questions, and other related problems emerge along the way. But the underlying problem I always arrive back at is the perverse incentive system in higher education and the “Publish or Perish” mentality. And of course what this tweet says:
Ok, so now let’s ask a “How?” question…
If academia’s incentive systems are one of the major factors holding us back from transitioning to a more sustainable publishing system, then… how do we change those incentives?
The Responsible Metrics Movement has been growing in recent years. Two statements are fueling this movement:
The San Francisco Declaration on Research Assessment (DORA)
Leiden Manifesto for Research Metrics
Each of these statements advocates for academia to critically examine how it assesses research, and encourages the adoption of responsible metrics (or methods) that judge research on its own merits and not the package it comes in (i.e. the prestige of the journal). DORA focuses primarily on combating the problem of journal-based metrics (the problems with the Journal Impact Factor are well known) and makes a number of suggestions for action by various stakeholders, while the Leiden Manifesto is more comprehensive, with ten principles. See this video for a nice overview of the Leiden Principles:
Evaluating researchers by actually reading their published outputs seems like an obvious solution… until you are on one of those hiring committees (or tenure/promotion/merit committees, or grant adjudication committees, etc.) and faced with a stack of applications, each with a long list of publications for you to read and assess! Instead, Stephen Curry (Chair of the DORA Steering Committee and a passionate advocate in this area) suggests that candidates compile a one- or two-page “bio-sketch” highlighting their best outputs and community contributions. I recently came across a research centre that is using just such a method to assess candidates:
“…we prefer applicants to select which papers they feel are their most important and write a short statement explaining why.”
From the Centre for Mechanochemical Cell Biology (CMCB)
DORA is also collecting examples of “Good Practices” like this on their website.
In my experience, many researchers are aware of these problems with journal-level metrics and the over-emphasis on glamour journals. It has even been noted that Nobel Prize winners of the past would likely not succeed in today’s hyper-competitive publish-or-perish climate. But researchers often feel powerless to change this system. This is why I particularly like the last paragraph of the CMCB blurb above:
“As individuals within CMCB, we argue for its principles during our panel and committee work outside CMCB.”
Researchers are the ones making up these committees assessing candidates! Use your voice during those committee meetings to argue for responsible metrics. Use your voice when your committee is drawing up the criteria by which to assess a candidate. Use your voice during collegial meetings when you are revising your standards for tenure/promotion/merit. You have more power than you realize.
Ingrained traditions in academia don’t change overnight. This is a long game of culture change. Keep using your voice until other voices join yours, those traditions wear down, and the culture changes. Maybe in the end we’ll have not only responsible metrics but sustainable, open publishing too!
Recommended Further Reading:
Lawrence, P. A. (2008). Lost in publication: How measurement harms science. Ethics in Science and Environmental Politics, 8(1), 9-11. https://doi.org/10.3354/esep00079
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314, 498-502. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2126010/
Vanclay, J. K. (2012). Impact factor: Outdated artefact or stepping-stone to journal certification? Scientometrics, 92(2), 211-238. https://doi.org/10.1007/s11192-011-0561-0
P.S. Assessing the actual research instead of the outlet it is published in has implications for the “Predatory Publishing” problem too. Martin Eve and Ernesto Priego wrote a fantastic piece that touches on this:
Eve, M. P., & Priego, E. (2017). Who is Actually Harmed by Predatory Publishers? TripleC: Communication, Capitalism & Critique, 15(2), 755–770. http://www.triple-c.at/index.php/tripleC/article/view/867
This article gives the views of the author and not necessarily the views of the Centre for Evidence Based Library and Information Practice or the University Library, University of Saskatchewan.