Surveillance Publishing

5 December, 2021

Björn Brembs recently explained how

“massive over-payment of academic publishers has enabled them to buy surveillance technology covering the entire workflow that can be used not only to be combined with our private data and sold, but also to make algorithmic (aka ‘evidence-led’) employment decisions.”

Reading about this led me to this article:

• Jefferson D. Pooley, Surveillance publishing.

It’s all about what publishers are doing to make money by collecting data on the habits of their readers. Let me quote a bunch!

After a general introduction to surveillance capitalism, Pooley turns to “surveillance publishing”. Their prime example: Elsevier. I’ll delete the scholarly footnotes here:

Consider Elsevier. The Dutch publishing house was founded in
the late nineteenth century, but it wasn’t until the 1970s that the firm began to launch and acquire journal titles at a frenzied pace. Elsevier’s model was Pergamon, the postwar science-publishing venture established by the brash Czech-born Robert Maxwell. By 1965, around the time that Garfield’s Science Citation Index first appeared, Pergamon was publishing 150 journals. Elsevier followed Maxwell’s lead, growing at a rate of 35 titles a year by the late 1970s. Both firms hiked their subscription prices aggressively, making huge profits off the prestige signaling of Garfield’s Journal Impact Factor. Maxwell sold Pergamon to Elsevier in 1991, months before his lurid death.

Elsevier was just getting started. The firm acquired The Lancet
the same year, when the company piloted what would become
ScienceDirect, its Web-based journal delivery platform. In 1993 the Dutch publisher merged with Reed International, a UK paper-maker turned media conglomerate. In 2015, the firm changed its name to RELX Group, after two decades of acquisitions, divestitures, and product launches—including Scopus in 2004, Elsevier’s answer to ISI’s Web of Science. The “shorter, more modern name,” RELX explained, is a nod to the company’s “transformation” from publisher to a “technology, content and analytics driven business.” RELX’s strategy? The “organic development of increasingly sophisticated information-based analytics and decisions tools”. Elsevier, in other words, was to become a surveillance publisher.

Since then, by acquisition and product launch, Elsevier has moved to make good on its self-description. By moving up and down the research lifecycle, the company has positioned itself to harvest behavioral surplus at every stage. Tracking lab results? Elsevier has Hivebench, acquired in 2016. Citation and data-sharing software? Mendeley, purchased in 2013. Posting your working paper or preprint? SSRN and Bepress, 2016 and 2017, respectively. Elsevier’s “solutions” for the post-publication phase of the scholarly workflow are anchored by Scopus and its 81 million records.

Curious about impact? Plum Analytics, an altmetrics company, acquired in 2017. Want to track your university’s researchers and their work? There’s the Pure “research information management system,” acquired in 2012. Measure researcher performance? SciVal, spun off from Scopus in 2009, which incorporates media monitoring service Newsflo, acquired in 2015.

Elsevier, to repurpose a computer science phrase, is now a full-stack publisher. Its products span the research lifecycle, from the lab bench through to impact scoring, and even—by way of Pure’s grant-searching tools—back to the bench, to begin anew. Some of its products are, you might say, services with benefits: Mendeley, for example, or even the ScienceDirect journal-delivery platform, provide reference management or journal access for customers and give off behavioral data to Elsevier. Products like SciVal and Pure, up the data chain, sell the processed data back to researchers and their employers, in the form of “research intelligence.”

It’s a good business for Elsevier. Facebook, Google, and ByteDance have to give away their consumer-facing services to attract data-producing users. If you’re not paying for it, the Silicon Valley adage has it, then you’re the product. For Elsevier and its peers, we’re the product and we’re paying (a lot) for it. Indeed, it’s likely that windfall subscription-and-APC profits in Elsevier’s “legacy” publishing business have financed its decade-long acquisition binge in analytics.

As Björn Brembs recently tweeted:

“massive over-payment of academic publishers has enabled them to buy surveillance technology covering the entire workflow that can be used not only to be combined with our private data and sold, but also to make algorithmic (aka ‘evidence-led’) employment decisions.”

This is insult piled on injury: Fleece us once only to fleece us all over again, first in the library and then in the assessment office. Elsevier’s prediction products sort and process mined data in a variety of ways. The company touts what it calls its Fingerprint® Engine, which applies machine learning techniques to an ocean’s worth of scholarly texts—article abstracts, yes, but also patents, funding announcements, and proposals. Presumably trained on human-coded examples (scholar-designated article keywords?), the model assigns keywords (e.g., “Drug Resistance”) to documents, together with what amounts to a weighted score (e.g., 73%). The list of terms and scores is, the company says, a “Fingerprint.” The Engine is used in a variety of products, including Expert Lookup (to find reviewers), the company’s Journal Finder, and its Pure university-level research-management software. In the latter case, it’s scholars who get Fingerprinted:

“Pure applies semantic technology and 10 different research-specific keyword vocabularies to analyze a researcher’s publications and grant awards and transform them into a unique Fingerprint—a distinct visual index of concepts and a weighted list of structured terms.”
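To make the shape of this output concrete: Elsevier’s Fingerprint Engine is proprietary, so nobody outside the company knows exactly how it works. But the kind of result Pooley describes—a list of terms, each with a weighted score—can be mimicked with a toy TF-IDF sketch. Everything below (the corpus, the scoring, the stopword list) is an illustrative stand-in, not Elsevier’s method:

```python
import math
from collections import Counter

# Purely illustrative: a toy "fingerprint" of a document as a list of
# weighted terms, built with plain TF-IDF. This only mimics the shape of
# the output Pooley describes (terms plus scores like "Drug Resistance, 73%").
STOPWORDS = {"in", "and", "for", "the", "of"}

def fingerprint(doc, corpus, top_n=3):
    """Return the doc's top terms, weights scaled so the top term is 100."""
    words = [w for w in doc.lower().split() if w not in STOPWORDS]
    tf = Counter(words)
    scores = {}
    for term, count in tf.items():
        # document frequency: how many corpus documents contain the term
        df = sum(1 for d in corpus if term in d.lower().split())
        idf = math.log((1 + len(corpus)) / (1 + df)) + 1
        scores[term] = (count / len(words)) * idf
    top = sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]
    peak = top[0][1] if top else 1.0
    return [(term, round(100 * w / peak)) for term, w in top]

corpus = [
    "drug resistance in bacterial infections",
    "category theory and compositional semantics",
    "drug trials for resistant infections",
]
print(fingerprint(corpus[0], corpus))
# → [('resistance', 100), ('bacterial', 100), ('drug', 76)]
```

The real engine presumably uses curated vocabularies and trained models rather than raw word counts, but the end product is the same sort of object: a weighted term list attached to a document—or, in Pure’s case, to a scholar.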

But it’s not just Elsevier:

The machine learning techniques that Elsevier is using are of a piece with RELX’s other predictive-analytics businesses aimed at corporate and legal customers, including LexisNexis Risk Solutions. Though RELX doesn’t provide specific revenue figures for its academic prediction products, the company’s 2020 SEC disclosures indicate that over a third of Elsevier’s revenue comes from databases and electronic reference products—a business, the company states, in which “we continued to drive good growth through content development and enhanced machine learning and natural language processing based functionality”.

Many of Elsevier’s rivals appear to be rushing into the analytics
market, too, with a similar full research-stack data harvesting
strategy. Taylor & Francis, for example, is a unit of Informa, a UK-based conglomerate whose roots can be traced to Lloyd’s List, the eighteenth-century maritime-intelligence journal. In its 2020 annual report, the company wrote that it intends to “more deeply use and analyze the first party data” sitting in Taylor & Francis and other divisions, to “develop new services based on hard data and behavioral data insights.”

Last year Informa acquired the Faculty of 1000, together with its OA F1000Research publishing platform. Not to be outdone, Wiley bought Hindawi, a large independent OA publisher, along with its Phenom platform. The Hindawi purchase followed Wiley’s 2016 acquisition of Atypon, a researcher-facing software firm whose online platform, Literatum, Wiley recently adopted across its journal portfolio. “Know thy reader,” Atypon writes of Literatum. “Construct reports on the fly and get visualization of content usage and users’ site behavior in real time.” Springer Nature, to cite a third example, sits under the same Holtzbrinck corporate umbrella as Digital Science, which incubates startups and launches products across the research lifecycle, including the Web of Science/Scopus competitor Dimensions, data repository Figshare, impact tracker Altmetric, and many others.

So, the definition of ‘diamond open access’ should include: no surveillance.


Compositionality — First Issue

30 December, 2019


Compositionality

Yay! The first volume of Compositionality has been published! You can read it here:

https://compositionality-journal.org

“Compositionality” is about how complex things can be assembled out of simpler parts. Compositionality is a journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition.
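For readers who haven’t met the idea before, here is a toy illustration (mine, not the journal’s) of compositionality in its simplest sense: building complex maps out of simpler parts by composing plain functions, the way category theory composes morphisms:

```python
# Toy illustration of compositionality: complex functions assembled from
# simpler ones. Composition of Python functions behaves like composition
# of morphisms in a category.
def compose(f, g):
    """Return the composite g ∘ f: apply f first, then g."""
    return lambda x: g(f(x))

double = lambda x: 2 * x
increment = lambda x: x + 1

# Composition is associative, just as for morphisms in a category:
h1 = compose(compose(double, increment), str)
h2 = compose(double, compose(increment, str))
print(h1(3), h2(3))  # → 7 7
```

Research in the journal applies this compositional viewpoint far beyond functions, to systems in physics, linguistics, engineering and more.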

Compositionality is a diamond open access journal. That means it’s free to publish in and free to read.

The executive board consists of Brendan Fong, Nina Otter and Joshua Tan. I thank them for all their work making this dream a reality!

The coordinating editors are Aleks Kissinger and Joachim Kock. The steering board consists of John Baez, Bob Coecke, Kathryn Hess, Steve Lack and Valeria de Paiva.

The editors are:

Corina Cirstea, University of Southampton, UK
Ross Duncan, University of Strathclyde, UK
Andrée Ehresmann, University of Picardie Jules Verne, France
Tobias Fritz, Perimeter Institute, Canada
Neil Ghani, University of Strathclyde, UK
Dan Ghica, University of Birmingham, UK
Jeremy Gibbons, University of Oxford, UK
Nick Gurski, Case Western Reserve University, USA
Helle Hvid Hansen, Delft University of Technology, Netherlands
Chris Heunen, University of Edinburgh, UK
Martha Lewis, University of Amsterdam, Netherlands
Samuel Mimram, École Polytechnique, France
Simona Paoli, University of Leicester, UK
Dusko Pavlovic, University of Hawaii, USA
Christian Retoré, Université de Montpellier, France
Mehrnoosh Sadrzadeh, Queen Mary University of London, UK
Peter Selinger, Dalhousie University, Canada
Pawel Sobocinski, University of Southampton, UK
David Spivak, MIT, USA
Jamie Vicary, University of Birmingham and University of Oxford, UK
Simon Willerton, University of Sheffield, UK


The Selected Papers Network (Part 4)

29 July, 2013

guest post by Christopher Lee

In my last post, I outlined four aspects of walled gardens that make them very resistant to escape:

• walled gardens make individual choice irrelevant, by transferring control to the owner, and tying your one remaining option (to leave the container) to being locked out of your professional ecosystem;

• all the competition is walled gardens;

• walled garden competition is winner-take-all;

• even if the “good guys” win (build the biggest walled garden), they become “bad guys” (masters of the walled garden, whose interests become diametrically opposed to that of the people stuck in their walled garden).

To state the obvious: even if someone launched a new site with the perfect interface and features for an alternative system of peer review, it would probably starve to death both for lack of users and lack of impact. Even for the rare user who found the site and switched all his activity to it, he would have little or no impact because almost no one would see his reviews or papers. Indeed, even if the Open Science community launched dozens of sites exploring various useful new approaches for scientific communication, that might make Open Science’s prospects worse rather than better. Since each of these sites would in effect be a little walled garden (for reasons I outlined last time), their number and diversity would mainly serve to fragment the community (i.e. the membership and activity on each such site might be ten times less than it would have been if there were only a few such sites). When your strengths (diversity; lots of new ideas) act as weaknesses, you need a new strategy.

SelectedPapers.net is an attempt to offer such a new strategy. It represents only about two weeks of development work by one person (me), and has only been up for about a month, so it can hardly be considered the last word in the manifold possibilities of this new strategy. However, this bare-bones prototype demonstrates how we can solve the four ‘walled garden dilemmas’:

Enable walled-garden users to ‘levitate’—be ‘in’ the walled garden but ‘above’ it at the same time. There’s nothing mystical about this. Think about it: that’s what search engines do all the time—a search engine pulls material out of all the world’s walled gardens, and gives it a new life by unifying it based on what it’s about. All selectedpapers.net does is act as a search engine that indexes content by what paper and what topics it’s about, and who wrote it.

This enables isolated posts by different people to come together in a unified conversation about a specific paper (or topic), independent of what walled gardens they came from—while simultaneously carrying on their full, normal life in their original walled garden.

Concretely, rather than telling Google+ users (for example) they should stop posting on Google+ and post only on selectedpapers.net instead (which would make their initial audience plunge to near zero), we tell them to add a few tags to their Google+ post so selectedpapers.net can easily index it. They retain their full Google+ audience, but they acquire a whole new set of potential interactions and audience (trivial example: if they post on a given paper, selectedpapers.net will display their post next to other people’s posts on the same paper, resulting in all sorts of possible crosstalk).

Some people have expressed concern that selectedpapers.net indexes Google+, rightly pointing out that Google+ is yet another walled garden. Doesn’t that undercut our strategy to escape from walled gardens? No. Our strategy is not to try to find a container that is not a walled garden; our strategy is to ‘levitate’ content from walled gardens. Google+ may be a walled garden in some respects, but it allows us to index users’ content, which is all we need.

It should be equally obvious that selectedpapers.net should not limit itself to Google+. Indeed, why should a search engine restrict itself to anything less than the whole world? Of course, there’s a spectrum of different levels of technical challenges for doing this. And this tends to produce an 80-20 rule, where 80% of the value can be attained by only 20% of the work. Social networks like Google+, Twitter etc. provide a large portion of the value (potential users), for very little effort—they provide open APIs that let us search their indexes, very easily. Blogs represent another valuable category for indexing.

More to the point, far more important than technology is building a culture where users expect their content to ‘fly’ unrestricted by walled-garden boundaries, and adopt shared practices that make that happen easily and naturally. Tagging is a simple example of that. By putting the key metadata (paper ID, topic ID) into the user’s public content, in a simple, standard way (as opposed to hidden in the walled garden’s proprietary database), tagging makes it easy for anyone and everyone to index it. And the more users get accustomed to the freedom and benefits this provides, the less willing they’ll be to accept walled gardens’ trying to take ownership (i.e. control) of the users’ own content.
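The technical side of this really is that simple: because the metadata lives in the public post text itself, any indexer can recover it with a couple of regular expressions. Here is a minimal sketch; the exact tag syntax shown is illustrative, not the official selectedpapers.net convention:

```python
import re

# Minimal sketch: the paper ID and topic tags live in the public post
# text itself, so any third party can index them. The tag syntax here
# is illustrative only.
PAPER_ID = re.compile(r'\barXiv:(\d{4}\.\d{4,5})\b')
TOPIC_TAG = re.compile(r'#(\w+)')

def index_post(text):
    """Extract (paper_ids, topic_tags) from a public post."""
    return PAPER_ID.findall(text), TOPIC_TAG.findall(text)

post = ("Lovely new proof of the main conjecture! "
        "arXiv:1234.56789 #spnetwork #categorytheory")
papers, tags = index_post(post)
print(papers, tags)  # → ['1234.56789'] ['spnetwork', 'categorytheory']
```

The point is that nothing here depends on the walled garden’s cooperation: the post is public, the tags are public, so the index can be public too.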

Don’t compete; cooperate: if we admit that it will be extremely difficult for a small new site (like selectedpapers.net) to compete with the big walled gardens that surround it, you might rightly ask, what options are left? Obviously, not to compete. But concretely, what would that mean?

☆ enable users in a walled garden to liberate their own content by tagging and indexing it;

☆ add value for those users (e.g. for mathematicians, give them LaTeX equation support);

☆ use the walled garden’s public channel as your network transport—i.e. build your community within and through the walled garden’s community.

This strategy treats the walled garden not as a competitor (to kill or be killed by) but instead as a partner (that provides value to you, and that you in turn add value to). Moreover, since this cooperation is designed to be open and universal rather than an exclusive partnership (concretely, anyone could index selectedpapers.net posts, because they are public), we can best describe this as public data federation.

Any number of sites could cooperate in this way, simply by:

☆ sharing a common culture of standard tagging conventions;

☆ treating public data (i.e. viewable by anybody on the web) as public (i.e. indexable by anybody);

☆ drawing on the shared index of global content (i.e. when the index has content that’s relevant to your site’s users, let them see and interact with it).

To anyone used to the traditional challenges of software interoperability, this might seem like a tall order—it might take years of software development to build such a data federation. But consider: by using Google+’s open API, selectedpapers.net has de facto established such a data federation with Google+, one of the biggest players in the business. Following the checklist:

☆ selectedpapers.net offers a very simple tagging standard, and more and more Google+ users are trying it;

☆ Google+ provides the API that enables public posts to be searched and indexed. Selectedpapers.net in turn ensures that posts made on selectedpapers.net are visible to Google+ by simply posting them on Google+;

☆ Selectedpapers.net users can see posts from (and have discussions with) Google+ users who have never logged into (or even heard of) selectedpapers.net, and vice versa.

Now consider: what if someone set up their own site based on the open source selectedpapers.net code (or even wrote their own implementation of our protocol from scratch). What would they need to do to ensure 100% interoperability (i.e. our three federation requirements above) with selectedpapers.net? Nothing. That federation interoperability is built into the protocol design itself. And since this is federation, that also means they’d have 100% interoperation with Google+ as well. We can easily do the same with Twitter, WordPress, and other public networks.

There are lots of relevant websites in this space. Which of them can we actually federate with in this way? This divides into two classes: those that have open APIs vs. those that don’t. If a walled garden has an API, you can typically federate with it simply by writing some code to use their API, and encouraging its users to start tagging. Everybody wins: the users gain new capabilities for free, and you’ve added value to that walled garden’s platform. For sites that lack such an API (typically smaller sites), you need more active cooperation to establish a data exchange protocol. For example, we are just starting discussions with arXiv and MathOverflow about such ‘federation’ data exchange.
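The federation idea described above can be sketched in a few lines: posts arrive from different walled gardens, and the index unifies them by the paper they discuss, so readers see one conversation regardless of where each post was written. The sources and post structure below are hypothetical stand-ins for real API results:

```python
from collections import defaultdict

# Toy federation index: posts from different walled gardens are merged
# into one conversation per paper. Sources and fields are hypothetical
# stand-ins for what real APIs would return.
def build_index(posts):
    """Group public posts from any source by the paper they discuss."""
    index = defaultdict(list)
    for post in posts:
        index[post["paper_id"]].append((post["source"], post["author"]))
    return dict(index)

posts = [
    {"source": "googleplus", "author": "alice", "paper_id": "arXiv:1234.5678"},
    {"source": "twitter",    "author": "bob",   "paper_id": "arXiv:1234.5678"},
    {"source": "blog",       "author": "carol", "paper_id": "arXiv:9999.0001"},
]
index = build_index(posts)
print(index["arXiv:1234.5678"])  # both posts appear, despite different gardens
```

This is why the paper ID is the crucial piece of shared metadata: it is the key on which posts from otherwise disconnected sites come together.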

To my mind, the most crucial aspect of this is sincerity: we truly wish to cooperate with (add value to) all these walled garden sites, not to compete with them (harm them). This isn’t some insidious commie plot to infiltrate and somehow destroy them. The bottom line is that websites will only join a federation if it benefits them, by making their site more useful and more attractive to users. Re-connecting with the rest of the world (in other walled gardens) accomplishes that in a very fundamental way. The only scenario I see where this would not seem advantageous, would be for a site that truly believes that it is going to achieve market dominance across this whole space (‘one walled garden to rule them all’). Looking over the landscape of players (big players like Google, Twitter, LinkedIn, Facebook, vs. little players focused on this space like Mendeley, ResearchGate, etc.), I don’t think any of the latter can claim this is a realistic plan—especially when you consider that any success in that direction will just make all other players federate together in self-defense.

Level the playing field: these considerations lead naturally to our third concern about walled gardens: walled garden competition strongly penalizes new, small players, and makes bigger players assume a winner-take-all outcome. Concretely, selectedpapers.net (or any other new site) is puny compared with, say, Mendeley. However, the federation strategy allows us to turn that on its head. Mendeley is puny compared with Google+, and selectedpapers.net operates in de facto federation with Google+. How likely is it that Mendeley is going to crush Google+ as a social network where people discuss science? If a selectedpapers.net user could only post to other selectedpapers.net members (a small audience), then Mendeley wins by default. But that’s not how it works: a selectedpapers.net user has all of Google+ as his potential audience. In a federation strategy, the question isn’t how big you are, but rather how big your federation is. And in this day of open APIs, it is really easy to extend that de facto federation across a big fraction of the world’s social networks. And that is a level playing field.

Provide no point of control: our last concern about walled gardens was that they inevitably create a divergence of interests for the winning garden’s owner vs. the users trapped inside. Hence the best of intentions (great ideas for building a wonderful community) can truly become the road to hell—an even better walled garden. After all, that’s how the current walled garden system evolved (from the reasonable and beneficial idea of establishing journals). If any one site ‘wins’, our troubles will just start all over again. Is there any alternative?

Yes: don’t let any one site win; only build a successful federation. Since user data can flow freely throughout the federation, users can move freely within the federation, without losing their content, accumulated contacts and reputation, in short, their professional ecosystem. If a successful site starts making policies that are detrimental to users, they can easily vote with their feet. The data federation re-establishes the basis for a free market, namely unconstrained individual freedom of choice.

The key is that there is no central point of control. No one ‘owns’ (i.e. controls) the data. It will be stored in many places. No one can decide to start denying it to someone else. Anyone can access the public data under the rules of the federation. Even if multiple major players conspired together, anyone else could set up an alternative site and appeal to users: vote with your feet! As we know from history, the problem with senates and other central control mechanisms is that given enough time and resources, they can be corrupted and captured by both elites and dictators. Only a federation system with no central point of control has a basic defense: regardless of what happens at ‘the top’, all individuals in the system have freedom of choice between many alternatives, and anybody can start a new alternative at any time. Indeed, the key red flag in any such system is when the powers-that-be start pushing all sorts of new rules that hinder people from starting new alternatives, or freely migrating to alternatives.

Note that implicit in this is an assertion that a healthy ecosystem should contain many diverse alternative sites that serve different subcommunities, united in a public data federation. I am not advocating that selectedpapers.net should become the ‘one paper index to rule them all’. Instead, I’m saying we need one successful exemplar of a federated system, that can help people see how to move their content beyond the walled garden and start ‘voting with their feet’.

So: how do we get there? In my view, we need to use selectedpapers.net to prove the viability of the federation model in two ways:

☆ we need to develop the selectedpapers.net interface to be a genuinely good way to discuss scientific papers, and subscribe to others’ recommendations. It goes without saying that the current interface needs lots of improvements, e.g. to work past some of Google+’s shortcomings. Given that the current interface took only a couple of weeks of hacking by just one developer (yours truly), this is eminently doable.

☆ we need to show that selectedpapers.net is not just a prisoner of Google+, but actually an open federation system, by adding other systems to the federation, such as Twitter and independent blogs. Again, this is straightforward.

To Be or Not To Be?

All of which brings us to the real question that will determine our fates. Are you for a public data federation, or not? In my
view, if you seriously want reform of the current walled garden
system, federation is the only path forward that is actually a path forward (instead of to just another walled garden). It is the only strategy that allows the community to retain control over its own content. That is fundamental.

And if you do want a public data federation, are you willing to
work for that outcome? If not, then I think you don’t really want it—because you can contribute very easily. Even just adding #spnetwork tags to your posts—wherever you write them—is a very valuable contribution that enormously increases the value of the federation ecosystem.

One more key question: who will join me in developing the
selectedpapers.net platform (both the software, and federation alliances)? As long as selectedpapers.net is a one-man effort, it must fail. We don’t need a big team, but it’s time to turn the project into a real team. The project has solid foundations that will enable rapid development of new federation partnerships—e.g. exciting, open APIs like REST — and of seamless, intuitive user interfaces — such as the MongoDB noSQL database, and AJAX methods. A small, collaborative team will be able to push this system forward quickly in exciting, useful ways. If you jump in now, you can be one of the very first people on the team.

I want to make one more appeal. Whatever you think about
selectedpapers.net as it exists today, forget about it.

Why? Because it’s irrelevant to the decision we need to make today: public data federation, yes or no? First, because the many flaws of the current selectedpapers.net have almost no bearing on that critical question (they mainly reflect the limitations of a version 0.1 alpha product). Second, because the whole point of federation is to ‘let a thousand flowers bloom’: to enable a diverse ecology of different tools and interfaces, made viable because they work together as a federation, rather than starving to death as separate, warring, walled gardens.

Of course, to get to that diverse, federated ecosystem, we first
have to prove that one federated system can succeed—and
liberate a bunch of minds in the process, starting with our own. We have to assemble a nucleus of users who are committed to making this idea succeed by using it, and a team of developers who are driven to build it. Remember, talking about the federation ideal will not by itself accomplish anything. We have to act, now; specifically, we have to quickly build a system that lets more and more people see the direct benefits of public data federation. If and when that is clearly successful, and growing sustainably, we can consider branching out, but not before.

For better or worse, in a world of walled gardens, selectedpapers.net is the one effort (in my limited knowledge) to do exactly that. It may be ugly, and annoying, and alpha, but it offers people a new and different kind of social contract from the walled gardens. (If someone can point me to an equivalent effort to implement the same public data federation strategy, we will of course be delighted to work with them! That’s what federation means.)

The question now for the development of public data federation is whether we are working together to make it happen, or on the contrary whether we are fragmenting and diffusing our effort. I believe that public data federation is the Manhattan Project of the war for Open Science. It really could change the world in a fundamental and enduring way. Right now the world may seem headed in the opposite direction (higher and higher walls), but it does not have to be that way. I believe that all of the required ingredients are demonstrably available and ready to go. The only remaining requirement is that we rise as a community and do it.

I am speaking to you, as one person to another. You as an individual do not even have the figleaf of saying “Well, if I do this, what’s the point? One person can’t have any impact.” You as an individual can change this project. You as an individual can change the world around you through what you do on this project.


The Selected Papers Network (Part 3)

12 July, 2013

guest post by Christopher Lee

A long time ago in a galaxy far, far away, scientists (and mathematicians) simply wrote letters to each other to discuss their findings.

In cultured cities, they formed clubs for the same purpose; at club meetings, particularly juicy letters might be read out in their entirety. Everything was informal (a bureaucracy-to-science ratio of around zero), individual (each person spoke only for themselves, and made up their own mind), and direct (when Pierre wrote to Johan, or Nikolai to Karl, no one yelled “Stop! It has not yet been blessed by a Journal!”).

To use my nomenclature, it was a selected-papers network. And it worked brilliantly for hundreds of years, despite wars, plagues and severe network latency (ping times of 10⁹ msec).

Even work we consider “modern” was conducted this way, almost to the twentieth century: for example, Darwin’s work on evolution by natural selection was “published” in 1858, by his friends arranging a reading of it at a meeting of the Linnean Society. From this point of view, it’s the current journal system that’s a historical anomaly, and a very recent one at that.

I’ll spare you an essay on the problems of the current system. Instead I want to focus on the practical question of how to change the system. The nub of the question is a conundrum: how is it, that just as the Internet is reducing publication and distribution costs to zero, Elsevier, the Nature group and other companies have been aggressively raising subscription prices (for us to read our own articles!), in many cases to extortionate levels?

That publishing companies would seek to outlaw Open Access rules via cynical legislation like the “Research Works Act” goes without saying; that they could blithely expect the market to buy a total divorce of price vs. value reveals a special kind of economic illogic.

That illogic has a name: the Walled Garden—and it is the immovable object we are up against. Any effort we make must be informed by careful study of what makes its iniquities so robust.

I’ll start by reviewing some obvious but important points.

A walled garden is an empty container that people are encouraged to fill with their precious content—at which point it stops being “theirs”, and becomes the effective property of whoever controls the container. The key word is control. When Pierre wrote a letter to Johan, the idea that they must pay some ignoramus $40 for the privilege would have been laughable, because there was no practical way for a third party to control that process. But when you put the same text in a journal, the journal gains control: it can block Pierre’s letter for any reason (or no reason); and it can lock out Johan (or any other reader) unless he pays whatever price it demands.

Some people might say this is just the “free market” at work—but that is a gross misunderstanding of the walled garden concept. Unless you can point to exactly how the “walls” lock people in, you don’t really understand it. For an author, a free market would be multiple journals competing to consider his paper (just as multiple papers compete for acceptance by a journal). This would be perfectly practical (they could all share the same set of 2-3 referee reports), but that’s not how journals decided to do it. For a reader or librarian, a free market would be multiple journals competing to deliver the same content (same articles): you choose the distributor that provides the best price and service.

Journals simply agree not to compete, by inserting a universal “non-compete clause” in their contracts; not only are authors forced to give exclusive rights to one journal, they are not even permitted to seek multiple bids (let more than one journal at a time see the paper). The whole purpose of the walled garden is to eliminate the free market.

Do you want to reform some of the problems of the current system? Then you had better come to grips with the following walled garden principles:

• Walled gardens make individual choice irrelevant, by transferring control to the owner, and tying your one remaining option (to leave the container) to being locked out of your professional ecosystem.

• All the competition are walled gardens.

• Walled garden competition is winner-take-all.

• Even if the “good guys” win and become the biggest walled garden, they become “bad guys”: masters of the walled garden, whose interests become diametrically opposed to those of the people stuck in their walled garden.

To make these ideas concrete, let’s see how they apply to any reform effort such as selectedpapers.net.

Walled gardens make individual choice irrelevant

Say somebody starts a website dedicated to such a reform effort, and you decide to contribute a review of an interesting paper. But a brand-new site, by definition, reaches essentially none of the relevant audience.

Question: what’s the point of writing a review if it affects nothing and no one will read it? There is no point; and even if you make the effort anyway, it achieves nothing. Individuals choosing to exile themselves from their professional ecosystem have no effect on the Walled Garden. Only a move of the whole ecosystem (a majority) would affect it.

Note this is dramatically different from a free market: even if I, a tiny flea, buy shares of the biggest, most traded company (AAPL, say), on the world’s biggest stock exchange, I immediately see AAPL’s price rise (a tiny bit) in response; when I sell, the price immediately falls in response. A free market is exquisitely sensitive to an individual’s decisions.

This is not an academic question. Many, many people have already tried to start websites with “reform” goals similar to those of selectedpapers.net. Unfortunately, none of them has gained traction, for the same reasons that Diaspora has zero chance to beat Facebook.

(If you want to look at one of the early leaders, “open source”, and backed by none other than the Nature Publishing Group, check out Connotea.org. Or on the flip side, consider the fate of Mendeley.)

For years after writing the Selected-Papers Network paper, I held off from doing anything, because at that time I could not see any path for solving this practical problem.

All the competition are walled gardens

In the physical world, walls do not build themselves, and they have a distressing (or pleasing!) tendency to fall down. In the digital world, by contrast, walls are not the exception but the rule.

A walled garden is simply any container whose data do not automatically interoperate with the outside world. Since it takes very special design to achieve any interoperability at all, nearly all websites are walled gardens by default.

More to the point, if websites A and B are competing with each other, is website A going to give B its crown jewels (its users and data)? No, it’s going to build the walls higher. Note that even if a website is open source (anyone can grab its code and start their own site), it’s still a walled garden, because its users and their precious data are stored only on its site, and cannot get out.

The significance of this for us is that essentially every “reform” solution being pushed at us, from Mendeley on out to idealistic open source sites, is unfortunately in practice a walled garden. And that means users won’t own their own content (in the crucial sense of control); the walled garden will.

Walled garden competition is winner-take-all

All this is made worse by the fact that walled garden competition has a strong tendency towards monopoly. It rewards consolidation and punishes small market players. In social networks, size matters. When a little walled garden tries to compete with a big walled garden, all advantages powerfully aid the big incumbent, even if the little one offers great new features. The whole mechanism of individuals “voting with their feet” can’t operate when the only choice available to them is to jump off a cliff: that is, leave the ecosystem where everyone else is.

Even if you win the walled garden war, the community will lose

Walled gardens intrinsically create a divergence of interests between their owners and their users. By giving the owner control and locking in the users, a walled garden gives the owner a powerful incentive to expand and exploit his control, at the expense of users, with very little recourse for them. For example, I think my own motivations for starting selectedpapers.net are reasonably pure, but if—for the sake of argument—it were to grow to dominate mathematics, I still don’t think you should let me (or anyone else) own it as a walled garden.

First of all, you probably won’t agree with many of my decisions; second, if Elsevier offers me $100 million, how can you know I won’t just sell you out? That’s what the founders of Mendeley just did. Note this argument applies not just to individuals, but even to the duly elected representatives of your own professional societies. For example, in biology some professional societies have been among the most reactionary in fighting Open Access—because they make most of their money from “their” journals. Because they own a walled garden, their interests align with Elsevier, not with their own members.

Actually that’s the whole story of how we got in this mess in the first place. The journal system was started by good people with good intentions, as the “Proceedings” of their club meetings. But because it introduced a mechanism of control, it became a walled garden, with inevitable consequences. If we devote our efforts to a solution that in practice becomes a walled garden, the consequences will again be inevitable.

Why am I dwelling on all these negatives? Let’s not kid ourselves: this is a hard problem, and we are by no means the first to try to crack it. Most of the doors in this prison have already been tried by smart, hard-working people, and they did not lead out. Obviously I don’t believe there’s no way out, or I wouldn’t have started selectedpapers.net. But I do believe we all need to absorb these lessons, if we’re to have any chance of real success.

Roll these principles over in your mind; wargame the possible pathways for reform and note where they collide with one of these principles. Can you find a reliable way out?

In my next post I’ll offer my own analysis of where I think the weak link is. But I am very curious to hear what you come up with.


The Selected Papers Network (Part 2)

14 June, 2013

Last time Christopher Lee and I described some problems with scholarly publishing. The big problems are expensive journals and ineffective peer review. But we argued that solving these problems requires new methods of

selection—assessing papers

and

endorsement—making the quality of papers known, thus giving scholars the prestige they need to get jobs and promotions.

The Selected Papers Network is an infrastructure for doing both these jobs in an open, distributed way. It’s not yet the solution to the big visible problems—just a framework upon which we can build those solutions. It’s just getting started, and it can use your help.

But before I talk about where all this is heading, and how you can help, let me say what exists now.

This is a bit dangerous, because if you’re not sure what a framework is for, and it’s not fully built yet, it can be confusing to see what’s been built so far! But if you’ve thought about the problems of scholarly publishing, you’re probably sick of hearing about dreams and hopes. You probably want to know what we’ve done so far. So let me start there.

SelectedPapers.net as it stands today

SelectedPapers.net lets you recommend papers, comment on them, discuss them, or simply add them to your reading list.

But instead of “locking up” your comments within its own website—the “walled garden” strategy followed by many other services—it explicitly shares these data in a way that people not on SelectedPapers.net can easily see. Any other service can see and use them too. It does this by using existing social networks—so that users of those social networks can see your recommendations and discuss them, even if they’ve never heard of SelectedPapers.net!

The idea is simple. You add some hashtags to let SelectedPapers.net know you’re talking to it, and to let it know which paper you’re talking about. It notices these hashtags and copies your comments over to its publicly accessible database.

So far Christopher Lee has got it working on Google+. So right now, if you’re a Google+ user, you can post comments on SelectedPapers.net using your usual Google+ identity and posting process, just by including suitable hashtags. Your post will be seen by your usual audience—but also by people visiting the SelectedPapers.net website, who don’t use Google+.

If you want to strip the idea down to one sentence, it’s this:

Given that social networks already exist, all we need for truly open scientific communication is a convention on a consistent set of tags and IDs for discussing papers.

That makes it possible to integrate discussion from all social networks—big and small—as a single unified forum. It’s a federated approach, rather than a single isolated website. And it won’t rely on any one social network: after Google+, we can get it working for Twitter and other networks and forums.

But more about the theory later. How, exactly, do you use it?

Getting Started

To see how it works, take a look here:

https://selectedpapers.net

Under ‘Recent activity’ you’ll see comments and recommendations of different papers, so far mostly on the arXiv.

Support for other social networks such as Twitter is coming soon. But here’s how you can use it now, if you’re a member of Google+:

• We suggest that you first create (in your Google+ account) a Google+ Circle specifically for the people you discuss research with (e.g. call it “Research”). If you already have such a circle, or circles, you can just use those.

• Click Sign in with Google on https://selectedpapers.net or on a paper discussion page.

• The usual Google sign-in window will appear (unless you are already signed in). Google will ask if you want to use the Selected Papers network, and specifically which Circle(s) you want to let it see the membership list(s) of (i.e. the names of people you have added to that Circle). SelectedPapers.net uses this as your initial “subscriptions”, i.e. the list of people whose recommendations you want to receive. We suggest you limit this to your “Research” circle, or whatever Circle(s) of yours fit this purpose.

Note the only information you are giving SelectedPapers.net access to is this list of names; in all other respects SelectedPapers.net is limited by Google+ to the same information that anyone on the internet can see, i.e. your public posts. For example, SelectedPapers.net cannot ever see your private posts within any of your Circles.

• Now you can initiate and join discussions of papers directly on any SelectedPapers.net page.

• Alternatively, without even signing in to SelectedPapers.net, you can just write posts on Google+ containing the hashtag #spnetwork, and they will automatically be included within the SelectedPapers.net discussions (i.e. indexed and displayed so that other people can reply to them etc.). Here’s an example of a Google+ post:

This article by Perelman outlines a proof of the Poincare conjecture!

#spnetwork #mustread #geometry #poincareConjecture arXiv:math/0211159

You need the tag #spnetwork for SelectedPapers.net to notice your post. Tags like #mustread, #recommend, and so on indicate your attitude to a paper. Tags like #geometry, #poincareConjecture and so on indicate a subject area: they let people search for papers by subject. A tag of the form arXiv:math/0211159 is necessary for arXiv papers; note that this does not include a # symbol.

For PubMed papers, include a tag of the form PMID:22291635. Other published papers usually have a DOI (digital object identifier), so for those include a tag of the form doi:10.3389/fncom.2012.00001.

Tags are the backbone of SelectedPapers.net; you can read more about them here.
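To make the convention concrete, here is a minimal sketch, in Python, of how a service might scan a post for these tags. This is not the actual SelectedPapers.net code; the regular expressions and the `parse_post` function are invented for illustration, under the assumption that tags appear as plain text in the body of a public post:

```python
import re

# Recognize the tag conventions described above: #spnetwork marks a post as
# intended for the network, #topic tags indicate attitude or subject area,
# and paper IDs use the arXiv:, PMID:, or doi: prefixes (no # symbol).
SPNET_TAG = re.compile(r'#spnetwork\b')
TOPIC_TAG = re.compile(r'#(\w+)')
PAPER_ID = re.compile(r'\b(arXiv:[\w./-]+|PMID:\d+|doi:\S+)')

def parse_post(text):
    """Return (is_spnet_post, topic_tags, paper_ids) for one post."""
    is_spnet = bool(SPNET_TAG.search(text))
    topics = [t for t in TOPIC_TAG.findall(text) if t != 'spnetwork']
    papers = PAPER_ID.findall(text)
    return is_spnet, topics, papers

post = ("This article by Perelman outlines a proof of the Poincare conjecture! "
        "#spnetwork #mustread #geometry #poincareConjecture arXiv:math/0211159")
print(parse_post(post))
# (True, ['mustread', 'geometry', 'poincareConjecture'], ['arXiv:math/0211159'])
```

The point of such a parser is that it needs no special API: any site that can read public posts can apply the same convention and recover the same structured data.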

• You can also post and see comments at https://selectedpapers.net. This page also lets you search for papers in the arXiv and search for published papers via their DOI or Pubmed ID. If you are signed in, the homepage will also show the latest recommendations (from people you’re subscribed to), papers on your reading list, and papers you tagged as interesting for your work.

Papers

Papers are the center of just about everything on the selected papers network. Here’s what you can currently do with a paper:

• click to see the full text of the paper via the arXiv or the publisher’s website.

• read other people’s recommendations and discussion of the paper.

• add it to your Reading List. This is simply a private list of papers—a convenient way of marking a paper for further attention later. When you are logged in, your Reading list is shown on the homepage. No one else can see your reading list.

• share the paper with others (such as your Google+ Circles or Google+ communities that you are part of).

• tag it as interesting for a specific topic. You do this either by clicking the checkbox of a topic (the page shows topics that other readers have tagged the paper with), by selecting from a list of topics that you have previously tagged as interesting to you, or by simply typing a tag name. These tags are public; that is, everyone can see what topics the paper has been tagged with, and who tagged them.

• post a question or comment about the paper, or reply to what other people have said about it. This traffic is public. Specifically, clicking the Discuss this Paper button gives you a Google+ window (with appropriate tags already filled in) for writing a post. Note that in order for the spnet to see your post, you must include Public in the list of recipients for your post (this is an inherent limitation of Google+, which limits apps to seeing only the same posts that any internet user would see, even when you are signed in to the app as yourself on Google+).

• recommend it to others. Once again, you must include Public in the list of recipients for your post, or the spnet cannot see it.

We strongly suggest that you include a topic hashtag for your research interest area. For example, if there is a hashtag that people in your field commonly use for posting on Twitter, use it. If you have to make up a new hashtag, keep it intuitive and follow “camelCase” capitalization, e.g. #openPeerReview.

Open design

Note that thanks to our open design, you do not even need to create a SelectedPapers.net login. Instead, SelectedPapers.net authenticates with Google (for example) that you are signed in to Google+; you never give SelectedPapers.net your Google password or access to any confidential information.

Moreover, even when you are signed in to SelectedPapers.net using your Google sign-in, it cannot see any of your private posts, only those you posted publicly—in other words, exactly the same as what anybody on the Internet can see.

What to do next?

We really need some people to start using SelectedPapers.net and start giving us bug reports. The place to do that is here:

https://github.com/cjlee112/spnet/issues

or if that’s too difficult for some reason, you can just leave a comment on this blog entry.

We could also use people who can write software to improve and expand the system. I can think of fifty ways the setup could be improved: but as usual with open-source software, what matters most is not what you suggest, but what you’re willing to do.

Next, let me mention three things we could do in the longer term. But I want to emphasize that these are just a few of many things that can be done in the ecosystem created by a selected papers network. We don’t all need to do the same thing, since it’s an open, federated system.

Overlay journals. A journal doesn’t need to do distribution and archiving of papers anymore: the arXiv or PubMed can do that. A journal can focus on the crucial work of selection and endorsement—it can just point to a paper on the arXiv or PubMed, and say “this paper is published”. Such journals, called overlay journals, are already being contemplated—see for example Tim Gowers’ post. But they should work better in the ecosystem created by a selected papers network.

Review boards. Publication doesn’t need to be a monogamous relation between a journal and an author. We could also have prestigious ‘review boards’ like the Harvard Genomics Board or the Institute of Network Science who pick, every so often, what they consider to be the best papers in their chosen area. In their CVs, scholars could then say things like “this paper was chosen as one of the Top Ten Papers in Topology in 2015 by the International Topology Review Board”. Of course, boards would become prestigious in the usual recursive way: by having prestigious members, being associated with prestigious institutions, and correctly choosing good papers to bestow prestige upon. But all this could be done quite cheaply.

Open peer review. Last time, we listed lots of problems with how journals referee papers. Open peer review is a way to solve these problems. I’ll say more about it next time. For now, go here:

• Christopher Lee, Open peer review by a selected-papers network, Frontiers in Computational Neuroscience 6 (2012).

A federated system

After reading this, you may be tempted to ask: “Doesn’t website X already do most of this? Why bother starting another?”

Here’s the answer: our approach is different because it is federated. What does that mean? Here’s the test: if somebody else were to write their own implementation of the SelectedPapers.net protocol and run it on their own website, would data entered by users of that site show up automatically on selectedpapers.net, and vice versa? The answer is yes, because the protocol transports its data on open, public networks, so the same mechanism that allows selectedpapers.net to read its users’ messages would work for anyone else. Note that no special communications between the new site and SelectedPapers.net would be required; it is just federated by design!
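As a toy illustration of this federation test, here is a Python sketch (the data and names are invented for illustration, not taken from any real implementation) of why two independent indexers that scan the same public stream necessarily end up with the same data, with no communication between them:

```python
# Two independent "sites" index the same public message stream. They share
# no code, no database, and no private API; their only common ground is the
# public stream and the #spnetwork opt-in convention.

PUBLIC_STREAM = [
    "Great proof! #spnetwork #recommend arXiv:math/0211159",
    "Photo of my cat",
    "Questions about the methods #spnetwork PMID:22291635",
]

def build_index(stream):
    """Index every public post that opts in via the #spnetwork tag."""
    return [post for post in stream if "#spnetwork" in post]

site_a = build_index(PUBLIC_STREAM)  # e.g. selectedpapers.net
site_b = build_index(PUBLIC_STREAM)  # an independent implementation

# Both sites see exactly the same discussions, automatically.
assert site_a == site_b
```

Because the data travels over a public network rather than living inside either site’s database, neither indexer can wall the other out; that is the structural difference from the walled-garden services discussed earlier.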

One more little website is not going to solve the problems with journals. The last thing anybody wants is another password to remember! There are already various sites trying to solve different pieces of the problem, but none of them are really getting traction. One reason is that the different sites can’t or won’t talk to each other—that is, federate. They are walled gardens, closed ecosystems. As a result, progress has been stalled for years.

And frankly, even if some walled garden did eventually win out, that wouldn’t solve the problem of expensive journals. If one party became able to control the flow of scholarly information, they’d eventually exploit this just as the journals do now.

So, we need a federated system, to make scholarly communication openly accessible not just for scholars but for everyone—and to keep it that way.


The Selected Papers Network (Part 1)

7 June, 2013

Christopher Lee has developed some new software called the Selected Papers Network. I want to explain that and invite you all to try using it! But first, in this article, I want to review the problems it’s trying to address.

There are lots of problems with scholarly publishing, and of course even more with academia as a whole. But I think Chris and I are focused on two: expensive journals, and ineffective peer review.

Expensive Journals

Our current method of publication has some big problems. For one thing, the academic community has allowed middlemen to take over the process of publication. We, the academic community, do most of the really tricky work. In particular, we write the papers and referee them. But they, the publishers, get almost all the money, and charge our libraries for it—more and more, thanks to their monopoly power. It’s an amazing business model:

Get smart people to work for free, then sell what they make back to them at high prices.

People outside academia have trouble understanding how this continues! To understand it, we need to think about what scholarly publishing and libraries actually achieve. In short:

1. Distribution. The results of scholarly work get distributed in publicly accessible form.

2. Archiving. The results, once distributed, are safely preserved.

3. Selection. The quality of the results is assessed, e.g. by refereeing.

4. Endorsement. The quality of the results is made known, giving the scholars the prestige they need to get jobs and promotions.

Thanks to the internet, jobs 1 and 2 have become much easier. Anyone can put anything on a website, and work can be safely preserved at sites like the arXiv and PubMed Central. All this is either cheap or already supported by government funds. We don’t need journals for this.

The journals still do jobs 3 and 4. These are the jobs that academia still needs to find new ways to do, to bring down the price of journals or make them entirely obsolete.

The big commercial publishers like to emphasize how they do job 3: selection. The editors contact the referees, remind them to deliver their referee reports, and communicate these reports to the authors, while maintaining the anonymity of the referees. This takes work.

However, this work can be done much more cheaply than you’d think from the prices of journals run by the big commercial publishers. We know this from the existence of good journals that charge much less. And we know it from the shockingly high profit margins of the big publishers, particularly Elsevier.

It’s clear that the big commercial publishers are using their monopoly power to charge outrageous prices for their products. Why do they continue to get away with this? Why don’t academics rebel and publish in cheaper journals?

One reason is a broken feedback loop. The academics don’t pay for journals out of their own pocket. Instead, their university library pays for the journals. Rising journal costs do hurt the academics: money goes into paying for journals that could be spent in other ways. But most of them don’t notice this.

The other reason is item 4: endorsement. This is the part of academic publishing that outsiders don’t understand. Academics want to get jobs and promotions. To do this, we need to prove that we’re ‘good’. But academia is so specialized that our colleagues are unable to tell how good our papers are. Not by actually reading them, anyway! So, they try to tell by indirect methods—and a very important one is the prestige of the journals we publish in.

The big commercial publishers have bought most of the prestigious journals. We can start new journals, and some of us are already doing that, but it takes time for these journals to become prestigious. In the meantime, most scholars prefer to publish in prestigious journals owned by the big publishers, even if this slowly drives their own libraries bankrupt. This is not because these scholars are dumb. It’s because a successful career in academia requires the constant accumulation of prestige.

The Elsevier boycott shows that more and more academics understand this trap and hate it. But hating a trap is not enough to escape the trap.

Boycotting Elsevier and other monopolistic publishers is a good thing. The arXiv and PubMed Central are good things, because they show that we can solve the distribution and archiving problems without the help of big commercial publishers. But we need to develop methods of scholarly publishing that solve the selection and endorsement problems in ways that can’t be captured by the big commercial publishers.

I emphasize ‘can’t be captured’, because these publishers won’t go down without a fight. Anything that works well, they will try to buy—and then they will try to extract a stream of revenue from it.

Ineffective Peer Review

While I am mostly concerned with how the big commercial publishers are driving libraries bankrupt, my friend Christopher Lee is more concerned with the failures of the current peer review system. He does a lot of innovative work on bioinformatics and genomics. This gives him a different perspective than me. So, let me just quote the list of problems from this paper:

• Christopher Lee, Open peer review by a selected-papers network, Frontiers in Computational Neuroscience 6 (2012).

The rest of this section is a quote:

Expert peer review (EPR) does not work for interdisciplinary peer review (IDPR). EPR means the assumption that the reviewer is expert in all aspects of the paper, and thus can evaluate both its impact and validity, and can evaluate the paper prior to obtaining answers from the authors or other referees. IDPR means the situation where at least one part of the paper lies outside the reviewer’s expertise. Since journals universally assume EPR, this creates artificially high barriers to innovative papers that combine two fields [Lee, 2006]—one of the most valuable sources of new discoveries.

Shoot first and ask questions later means the reviewer is expected to state a REJECT/ACCEPT position before getting answers from the authors or other referees on questions that lie outside the reviewer’s expertise.

No synthesis: if review of a paper requires synthesis—combining the different expertise of the authors and reviewers in order to determine what assumptions and criteria are valid for evaluating it—both of the previous assumptions can fail badly [Lee, 2006].

Journals provide no tools for finding the right audience for an innovative paper. A paper that introduces a new combination of fields or ideas has an audience search problem: it must search multiple fields for people who can appreciate that new combination. Whereas a journal is like a TV channel (a large, pre-defined audience for a standard topic), such a paper needs something more like Google—a way of quickly searching multiple audiences to find the subset of people who can understand its value.

Each paper’s impact is pre-determined rather than post-evaluated: By ‘pre-determination’ I mean that both its impact metric (which for most purposes is simply the title of the journal it was published in) and its actual readership are locked in (by the referees’ decision to publish it in a given journal) before any readers are allowed to see it. By ‘post-evaluation’ I mean that impact should simply be measured by the research community’s long-term response and evaluation of it.

Non-expert PUSH means that a pre-determination decision is made by someone outside the paper’s actual audience, i.e., the reviewer would not ordinarily choose to read it, because it does not seem to contribute sufficiently to his personal research interests. Such a reviewer is forced to guess whether (and how much) the paper will interest other audiences that lie outside his personal interests and expertise. Unfortunately, people are not good at making such guesses; history is littered with examples of rejected papers and grants that later turned out to be of great interest to many researchers. The highly specialized character of scientific research, and the rapid emergence of new subfields, make this a big problem.

In addition to such false-negatives, non-expert PUSH also causes a huge false-positive problem, i.e., reviewers accept many papers that do not personally interest them and which turn out not to interest anybody; a large fraction of published papers subsequently receive zero or only one citation (even including self-citations [Adler et al., 2008]). Note that non-expert PUSH will occur by default unless reviewers are instructed to refuse to review anything that is not of compelling interest for their own work. Unfortunately journals assert an opposite policy.

One man, one nuke means the standard in which a single negative review equals REJECT. Whereas post-evaluation measures a paper’s value over the whole research community (‘one man, one vote’), standard peer review enforces conformity: if one referee does not understand or like it, prevent everyone from seeing it.

PUSH makes refereeing a political minefield: consider the contrast between a conference (where researchers publicly speak up to ask challenging questions or to criticize) vs. journal peer review (where it is reckoned necessary to hide their identities in a ‘referee protection program’). The problem is that each referee is given artificial power over what other people can like—he can either confer a large value on the paper (by giving it the imprimatur and readership of the journal) or consign it zero value (by preventing those readers from seeing it). This artificial power warps many aspects of the review process; even the ‘solution’ to this problem—shrouding the referees in secrecy—causes many pathologies. Fundamentally, current peer review treats the reviewer not as a peer but as one who wields a diktat: prosecutor, jury, and executioner all rolled into one.

Restart at zero means each journal conducts a completely separate review process of a paper, multiplying the costs (in time and effort) for publishing it in proportion to the number of journals it must be submitted to. Note that this particularly impedes innovative papers, which tend to aim for higher-profile journals, and are more likely to suffer from referees’ IDPR errors. When the time cost for publishing such work exceeds by several fold the time required to do the work, it becomes more cost-effective to simply abandon that effort, and switch to a ‘standard’ research topic where repetition of a pattern in many papers has established a clear template for a publishable unit (i.e., a widely agreed checklist of criteria for a paper to be accepted).

The reviews are thrown away: after all the work invested in obtaining reviews, no readers are permitted to see them. Important concerns and contributions are thus denied to the research community, and the referees receive no credit for the vital contribution they have made to validating the paper.

In summary, current peer review is designed to work for large, well-established fields, i.e., where you can easily find a journal with a high probability that every one of your reviewers will be in your paper’s target audience and will be expert in all aspects of your paper. Unfortunately, this is just not the case for a large fraction of researchers, due to the high level of specialization in science, the rapid emergence of new subfields, and the high value of boundary-crossing research (e.g., bioinformatics, which intersects biology, computer science, and math).

Toward solutions

Next time I’ll talk about the software Christopher Lee has set up. But if you want to get a rough sense of how it works, read the section of Christopher Lee’s paper called The Proposal in Brief.


Open Access to Taxpayer-Funded Research

23 February, 2013

According to a White House webpage, John Holdren, director of the White House Office of Science and Technology Policy, has

… issued a memorandum today to Federal agencies that directs those with more than $100 million in research and development expenditures to develop plans to make the results of federally-funded research publicly available free of charge within 12 months after original publication.

This is already true for research funded by the National Institute of Health. For years some of us have been pushing for the National Science Foundation and other agencies to do the same thing. Elsevier and other companies fought against it, even trying to pass a law to stop it…. but a petition to the White House seems to have had an effect!

In response to this petition, Holdren now says:

while this new policy call does not insist that every agency copy the NIH approach exactly, it does ensure that similar policies will appear across government.

If this really happens, it will be very big news. So let’s fight to make sure this initiative doesn’t get watered down or undermined by the bad guys! The quickest, easiest thing is to talk to the Office of Science and Technology Policy, either by phone or email, as explained here. A phone call counts more than an email.

One great thing about Holdren’s new memo is that it requires open access to experimental data, not just papers.

And one sad thing is that it only applies to federally funded research in the sciences, not the humanities. It does not apply to the National Endowment for the Humanities. Done well, research in the humanities can be just as important as scientific research… since most of our problems involve humans.


Elsevier: Strangling Libraries Worldwide

17 October, 2012

After academics worldwide began a boycott against Elsevier, this publisher claimed it would mend its ways and treat mathematicians better.

Why just mathematicians? Maybe they didn’t notice that only 17% of the researchers boycotting them are mathematicians. More likely, they’re trying a ‘divide and conquer’ strategy.

Despite their placating gestures, the overall problem persists:

Elsevier’s business model is to get very smart people to work for free, then sell their results back to them at high and ever-rising prices.

Does that sound sustainable to you? It works better than you might think, because they have control over many journals that academics want, and they sell these journals in big ‘bundles’, so you can’t stop buying just some of them. In short, they have monopoly power.

Worse, the people who actually buy the journals are not the academics, but university libraries. Librarians are very nice people. They want to keep their customers—the academics—happy. So they haven’t been very good at saying what they should:

Okay, you want to raise your prices? Fine, we’ll stop subscribing to all your journals until you lower them!

And so, libraries world-wide are slowly being strangled by Elsevier.

A while back, when the economic crisis hit here at U. C. Riverside, our library’s budget was cut. Journals eat up most of the budget, but librarians felt they couldn’t drop subscriptions to all the Elsevier journals, and Elsevier’s practice of bundling meant they couldn’t drop just some of them. The only ways to cut costs were to cut library hours, lay off staff, cut journals published by smaller—and cheaper!—publishers, and buy fewer books. Books can always be bought later… in theory… so they took the biggest hit. Our book budget was slashed to about a tenth of its original level!

The people most hurt were not mathematicians or scientists, but people working in the humanities. They’re the ones who use books the most.

And here’s a shocking story I recently got in my email. I’ll paraphrase, because the details of cases like this are kept secret thanks to Elsevier’s legal tactics:

I wanted to inform you that the University of X is negotiating its new contract with Elsevier for 2013–2015, and to describe what effect Elsevier’s proclaimed changes have had.

First of all, the university library has a 42% smaller budget in 2013 than in 2010 for books, journals, etc. So they are negotiating with many publishers, to be able to cancel more subscriptions than allowed in the existing contracts.

The Elsevier contract for journal subscriptions ends in 2012, and the contract for the so-called “Freedom Collection”—a bundle providing access to all non-subscribed journals—ends in 2013. I asked the librarian whether there was a price reduction for the new contracts. He reported:

At the beginning of the negotiations he told the Elsevier sales representative, Mister Q, that the University of X has its back to the wall due to the 42% budget cut. Q offered a new contract with a moderate price increase of around 5%. A price decrease was out of the question.

Our librarian asked whether he could cancel various subscriptions, many more than allowed in the expiring contract. Q agreed in principle—as long as the total price did not decrease! He was quite cooperative, and essentially offered various knives to be stabbed with, such as:

• a price increase for the Freedom Collection

or

• an increase of the content fee: this fee, charged on top of the subscription price for electronic access, could go up to 25%. It is charged even if one wants only electronic access and no printed volumes.

Then our librarian asked Q what he should reply to my question about price decreases. Q sent a long reply including that:

– our national science foundation bought the Elsevier archive some years ago. Therefore we would not benefit from the fact that the archives are now partly free.

– our university cancelled all its math subscriptions back in 2007. Therefore we do not benefit from the price decrease of math journals.

Then he explained at length that we would benefit from what they were doing “as part of our ongoing project to address the needs of the mathematics community”: “holding down 2013 prices, launching a Core Mathematics subject collection, convening an advisory Scientific Council for Mathematics – designed to meet the specific needs of the mathematics community, members of which were critical of Elsevier in the wake of the Cost of Knowledge petition.”

I hope you see why we all need to boycott Elsevier. Stop publishing our papers with them, stop refereeing papers for them, stop working as editors for them, and convince your librarian that it’s okay to unsubscribe from their journals. Please go to this website and join over 12,000 top researchers in this boycott.


Free Access to Taxpayer-Funded Research — Act Now!

31 May, 2012

If you’re a US citizen, your taxes pay for lots of scientific research. If you sign this White House petition, you may get to see the research you paid for!

We need a total of 25,000 signatures before June 19th for this to land on the president’s desk. That sounds hard. But:

• On May 29th, we only needed 5825 more.

• On May 30th we only needed 4765 more.

• Right now, on May 31st, we only need 3354.

We can do it! Sign it and pass it on!
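The counts above imply real momentum. Here is my own quick sanity check of that arithmetic (not from the post itself), extrapolating from the “still needed” figures quoted above:

```python
# Back-of-the-envelope extrapolation from the counts quoted above
# (signatures still needed on each date: 5825, 4765, 3354).
daily_gains = [5825 - 4765, 4765 - 3354]        # May 29→30 and May 30→31
avg_rate = sum(daily_gains) / len(daily_gains)  # average signatures per day
days_left = 19                                  # May 31 to the June 19 deadline
days_needed = 3354 / avg_rate                   # days to close the gap at this pace

print(days_needed < days_left)                  # True
```

At the recent pace, the remaining 3354 signatures would be collected in about three days, well before the deadline.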

The petition says:

Require free access over the Internet to scientific journal articles arising from taxpayer-funded research. We believe in the power of the Internet to foster innovation, research, and education. Requiring the published results of taxpayer-funded research to be posted on the Internet in human and machine readable form would provide access to patients and caregivers, students and their teachers, researchers, entrepreneurs, and other taxpayers who paid for the research. Expanding access would speed the research process and increase the return on our investment in scientific research. The highly successful Public Access Policy of the National Institutes of Health proves that this can be done without disrupting the research process, and we urge President Obama to act now to implement open access policies for all federal agencies that fund scientific research.

If you want more information, read about the Federal Research Public Access Act. This is a bill that would make taxpayer-funded research freely available, while still preserving the legitimate rights of publishing companies.


The Education of a Scientist

29 February, 2012

Why are scientists like me getting so worked up over Elsevier and other journal publishers? It must seem strange from the outside. This cartoon explains it very clearly. It’s hilarious—except that it’s TRUE!!! This is why we need a revolution.

(It’s true except for one small thing: in math and physics, Elsevier and Springer let us put our papers on our websites and free electronic archives… though not the final version, only the near-final draft. This is a concession we had to fight for.)

What can you do? Two easy things:

• If you’re an academic, add your name to the boycott of Elsevier.

• If you’re a US citizen, sign this White House petition before March 9.

Why the problem is hard

Why is it so hard to solve the journal problem? Here’s a quick simplified explanation for outsiders—people who don’t live in the world of university professors.

There are lots of open-access journals that are free to read but charge the author a fee. There are even lots that are free to read and free for the author. Why doesn’t everyone switch to publishing in these? Lots of us have. But most haven’t. Two reasons:

1) These journals aren’t as “prestigious” as the journals owned by the evil Big Three publishers: Elsevier, Springer, and Wiley-Blackwell. In the last 30 years the Big Three bought most of the really “prestigious” journals – and a journal can’t become “prestigious” overnight, so while things are changing, they’re changing slowly.

Publishing in a “prestigious” journal helps you get hired, promoted, and get grants. “Prestige” is not a vague thing: it’s even measured numerically using something called the Impact Factor. It may be baloney, but it is collectively agreed-upon baloney. Trying to make it go away is like trying to make money go away: people would not know what to do without it.
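For the curious, the Impact Factor really is just a simple ratio. Here is a sketch of the standard calculation with made-up numbers (the function name and figures are my own illustration, not real data): citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items from those two years.

```python
# Minimal sketch of the Journal Impact Factor calculation (toy numbers).
# E.g. a 2012 JIF: citations in 2012 to the journal's 2010-2011 papers,
# divided by the number of citable 2010-2011 papers.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Average citations per recent paper, the standard two-year window."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose 200 articles from 2010-2011 drew 500 citations in 2012:
print(impact_factor(500, 200))
```

Note how crude this is: a single average over a two-year window, heavily skewed by a few highly cited papers. That’s the “baloney” part; the collective agreement is what gives it power.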

2) It’s not the professors who pay the outrageous subscription fees for journals – it’s the university libraries. So nothing instantly punishes professors for publishing in “prestigious” but highly expensive journals, except the nasty rules about resharing journal articles, and those rules are invisible if you live in a world where every professor has library access!

So, the problem is hard to solve. The fight will be hard.

But we’ll win anyway, because the current situation is just too outrageous to tolerate. We have strategies and we’re pursuing lots of them. You can help by doing those two easy things.