Scholarly Publishing: crossing the Rubicon – BookMachine, Oxford 7 September 2017

The BookMachine panel discussion on disruption in scholarly publishing took place on the 7th of September in Oxford, sponsored by Ingenta. It was a great event, with lots of food for thought, and I share my notes here. The discussion was chaired by Byron Russell (Ingenta) and focussed on how publishers can stay relevant and useful in the face of new technologies, new ways of information sharing, and changing research models and needs. The speakers were Charlie Rapple (Kudos), Phill Jones (Digital Science), and Duncan Campbell (Wiley).

What is the most disruptive thing in scholarly publishing at the moment?

The panel agreed that actually the publishing industry has not been disrupted. The large companies have not lost market share. There has been consolidation, not disruption, and consolidation might be considered the opposite of disruption. So, to ask a different question: what has been the most innovative thing? One answer is the effects of Open Access. OA has created an “author pays” model. Power has shifted towards the author and away from the library. In response to this, publishers are now focussing on services to authors (data sharing, etc.).

You can draw a path from evolution to revolution to disruption, and many commentators label things as disruptive which are really only evolutionary or revolutionary. The transition to digital has changed a lot of things, but not disruptively. Scholarly societies are wondering how they can continue to add value, and perhaps technological change has been more disruptive for societies than for publishers?

Why hasn’t scholarly publishing been more disrupted? We are still here. The single most disruptive thing hasn’t happened yet. Publishers rely on the scholarly importance of the version of record. This, by the way, is just as true for OA publishers as it is for traditional publishers. What happens when other versions become valid? We are seeing the rise of preprints, and of data in multiple formats, both of which move away from “the final article”.

Are publishers responding to change?

There is a huge amount going on inside publishing. Publishers need to keep listening to their users (including libraries and societies) and to their changing needs. At the moment, it’s apparent that we are still providing a useful service. We are turning richer article inputs into richer article outputs. Publishers need to listen, and innovate based on needs.

It is very difficult as an incumbent to be innovative. However, it should be said that publishers are fantastic at infrastructure. It was publishers who were behind the development of Crossref and the DOI, for example. Plumbing is not sexy, but it is important. Infrastructure can be innovative. It’s also worth saying that publishers are generally supportive of innovation elsewhere, outside of their organizations. For example, big publishers supported Kudos in its early days, and were crucial for the company’s development.

Infrastructure is important now, and will become even more so in the future. A current model seems to be that an academic has an idea for something that needs to be fixed, a big company supports them, and then down the line acquires them. The risk is that learned societies and smaller publishers are not getting involved fast enough, with the result that the big companies are pulling ahead. Smaller publishers should try to get involved earlier.

Smaller publishers might think that they don’t have the cash to invest. However, smaller publishers can be creative, perhaps providing feedback in return for discounts, for example. It is easier for big publishers, but it is possible for smaller ones too. As an example, Ingenta works with a lot of small publishers, some with only 2 or 3 staff, and introduces them to new developments. Collaboration between smaller publishers might be the key (the IPG does a great job with this).

What is the effect of the demise of consortia deals? Are we seeing the end of the Big Deal?

There are pressures, but deals on the whole are hanging on. The predicted slew of cancellations has still not materialised.

Access and discoverability

Library discovery systems are very expensive, but there are alternatives. Google Scholar is becoming more disruptive, and more widely used by scholars who may not even be aware of the library’s own discovery systems.

One large Dutch university killed off their discovery system, and put up a one-page document telling people how to use Google Scholar. Everything was fine. (Although a year later the library signed up to EBSCO’s discovery system, so perhaps there was more to this than meets the eye.)

New technology does not always succeed in its original form, although it can go on to evolve into something different. ReadCube is a good example. This was a kind of patron-driven acquisition (PDA) system for journal articles, and was well received by libraries. It was like a Netflix for articles, and enabled control of what people could do (print, share). It did not succeed in the end, because publishers were not prepared to investigate the new model. But the tech behind it has gone on to support a system for content sharing developed by another publisher.

For access to articles, forget Sci-Hub, forget patron-driven acquisition. Even without access to a university library, simply Googling an article title almost never fails. Green OA works well.

Copyright, sharing and piracy

Where do we draw the line on issues of copyright? In relation to the sharing of articles, there is a difference between individuals sharing stuff (sharing between academics) and a more large-scale attempt to share everything (Sci-Hub). The line not to be crossed can be judged by asking who is sharing it and for what reason.

Sharing is fine, but the systematization of sharing is a problem. Sci-Hub has forced publishers to face up to the fact that academics already share articles among colleagues. Sci-Hub has pushed the agenda, but it is not the answer. Unpaywall is really neat, and legal.

Publishers feel a bit beleaguered when people think that piracy is bad but publishers are worse! What can publishers do? The key thing is that everything we create needs to serve the customer’s needs. The relationship between academics and publishers has been maintained by senior academics, serving on editorial boards for example. But are senior people really in touch with the needs of post-docs and junior researchers? There is a current shift in the research funding agenda, away from disciplinary funding, towards big, multidisciplinary projects (a cure for Alzheimer’s, the exploration of Mars). If people are working in multidisciplinary projects, which journal should they publish in? Will the channels change?

As well as the increasingly thematic nature of funding, let’s hope there is a change in how research is evaluated too. The idea that it’s based on where you are published is so wrong-headed. There are other outlets. We’ve got to use technology to start looking differently at impact, influence, and reputation.

In the old days, as publishers you never used to encounter a user; you dealt with the bookshop, the library. Things are changing and we need to get on board with the changes. With the rise of the machines (AI) in the sharing and discovery of information, what happens to publishers?

How do external parties view academic publishing?

Interest from outside investors is in scientific information, not in scientific publishing. Publishing is dead, long live scholarly information. Technology, information tools, open science, open data. You can put anything into Figshare and give it a DOI.

How can publishers move forward in ways that their customers might not think they need or want? Publishers might not be thanked for driving things forward, but you can get around that by creating a new brand, trying things out and moving forward but without the reputational risk.

Should publishers get involved with driving tech development in information sharing? The Belmont Forum was flagged as an interesting case. The Belmont Forum’s vision is:

“International transdisciplinary research providing knowledge for understanding, mitigating and adapting to global environmental change.”

Forum members and partner organizations work collaboratively to meet this Challenge by issuing international calls for proposals, committing to best practices for open data access, and providing transdisciplinary training. To that end, the Belmont Forum is also working to enhance the broader capacity to conduct transnational environmental change research through its e-Infrastructure and Data Management initiative.

This is transdisciplinary research and information sharing, but not “publishing” as we would traditionally recognise it. But publishers should take heart from the fact that there are many things publishers know how to do that others in the research ecosystem just don’t. Many people think publishing is easy until they try to do it. The Belmont Forum recently asked publishers for input and advice, which is encouraging. Knowing whether and how to get involved comes back to listening to your community; that’s how publishers can contribute.