
CHORUS is now live - how does it stack up to PubMed?

What is CHORUS and why is it important to know about if you’re an academic? From the FAQ (bold emphasis mine):

CHORUS (Clearinghouse for the Open Research of the United States) is a not-for-profit public-private partnership to provide public access to the peer-reviewed publications that report on federally funded research. Conceived by publishers as a public access solution for funding agencies, research institutions, and the public, CHORUS is in active development with more than 100 signatories (and growing). Five goals drive CHORUS’ functionality: identification, discovery, access, preservation, and compliance. CHORUS is an information bridge, supporting agency search portals and enabling users to easily find free, public access journal articles on publisher platforms.

Only it fails at the one thing it claims to support, public access - at least as far as I can tell so far. And this is the big worry we’ve had all along: that a paywall-publisher-backed solution to the White House’s OSTP mandate would not work. For a critical overview of the concerns, see Michael Eisen’s comments from one year ago when CHORUS was announced.

Why isn’t CHORUS working?

Let us jump right into doing a search. Here’s an example query for NIH funded research. When I ran this search today (August 1, 2014) I got only 3,775 results. Hmmm. That can’t be right, can it? Only 3,775 NIH funded articles? Moving on…

The first result I got was an article published in July 2014 in the American Journal of Medical Genetics. I clicked the DOI expecting public access, and hit a paywall. Oh wait, that’s right - CHORUS also indexes embargoed research set to actually become public open access in 12-24+ months. The next several search results - same paywall. Not until the fifth result did I reach an Open Access article.

OK fine. Perhaps it is reasonable to include a mix of embargoed papers with public open access papers - even though OPEN RESEARCH is in the name of CHORUS. I’ll just click the filter for actual public open access papers and see my results. Hmm, unfortunately there is no filter for actual public open access papers. Ruh-rohs. 

And there does not appear to be any labeling on search results indicating whether a paper is actually public open access or still embargoed (for some unknown period of 1-2 years). Ruh-rohs again.

Are we just seeing teething pains here? In some things for sure, for example only having 3,775 NIH results (when there are millions). It can take time to get all of that backlog from publishers (though I don’t know why they’d launch with such a paltry number). However, I don’t believe the lack of Open Access labels or ability to search only for papers already Open Access (rather than embargoed) is a teething problem. That’s a major oversight and makes you wonder why it was left out in a system designed by a consortium of paywall publishers. I can’t imagine SPARC, for example, leaving out an Open Access filter if they had built this search.

What else is wrong with CHORUS? 

The above was just one technical problem, albeit a very concerning one. The main issue is the inherent conflict of interest that exists in allowing subscription publishers the ability to control a major research portal. As Michael Eisen put it, that’s like allowing the NRA to be in charge of background checks and the gun permit database.

In the title I asked, “how does CHORUS stack up to PubMed?” We need to make this comparison since one of the aims of CHORUS is to direct readers to the journal website instead of having them read/download from PubMed Central (PMC).

Perhaps most importantly, CHORUS allows publishers to retain reader traffic on their own journal sites, rather than sending the reader to a third party repository.

http://chorusaccess.org/faq/#3

And if you believe Scholarly Kitchen then PMC is robbing advertising revenues from publishers and PMC is costing taxpayers money as a useless redundant index of actual public/open access papers. Let’s not mince words, Scholarly Kitchen (and by extension the Society for Scholarly Publishing) believes that PubMed and PMC should be shut down. No one believes taxpayer money should be needlessly wasted, but it is a tall order to replace PubMed and PMC, so our expectations for CHORUS should be just as high.

Unfortunately, it is clear from using the CHORUS search tool that I have far less access and insight into publicly available research. And while an open API is slated for the future, it is questionable whether it will be as feature-rich as NCBI’s own API into PubMed and PMC.
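For a sense of what CHORUS would have to match, NCBI’s E-utilities already let anyone query PubMed programmatically today. Here is a minimal sketch against the esearch endpoint; the specific filter terms in the example query are illustrative assumptions on my part, not a definitive recipe for finding every NIH-funded, publicly accessible paper:

```python
# Minimal sketch: counting PubMed records via NCBI's E-utilities (esearch).
# The query terms below are illustrative assumptions, not an authoritative filter.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the total number of PubMed records matching a query term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,  # we only need the total count, not the record IDs
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    # Hypothetical query: NIH-supported papers that also have free full text.
    query = '"research support, n.i.h., extramural"[pt] AND "free full text"[sb]'
    print(pubmed_count(query))
```

If run, a query along these lines would presumably return far more than the 3,775 results CHORUS shows, which is part of the point.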

CHORUS also fragments what is otherwise an aggregated index in PubMed. CHORUS looks to index only US federally funded research that is either Open Access or slated to become so after a lengthy embargo. This means you still need to rely on PMC to find a non-US-funded Open Access article. Clearly we still want that since it helps US researchers, right? Then why shut PMC down?

CHORUS isn’t free either. They’ve set the business model up such that publishers pay to have their articles indexed there. Do you think publishers are going to absorb those costs, or pass them along to authors and subscribers? The fact that CHORUS won’t index unless a publisher pays is rather scary, especially if CHORUS were ever to become the de facto database for finding research.

In Summary

I think CHORUS will improve over time, for sure. My worries though are the inherent conflicts of interest and that a major mouthpiece for CHORUS is calling for the removal of PubMed and PMC. I’m also skeptical whenever I see an organization using deceptive acronyms. CHORUS is not a database of Open Research as its name suggests. At least not ‘Open’ in the sense that the US public thinks of open.

You see, if CHORUS can convince the public and US Congress or OSTP that research under a two-year embargo is still ‘open’, then they’ve won. It’s a setback for what is really Open Access. It’s nothing short of marketing genius (or manufactured consent) to insert Open Research into the organizational name.

I think these are legitimate concerns that researchers and the OSTP should be asking of CHORUS.


What have we learned about ourselves from the Eich/Mozilla controversy?

So Brendan Eich has resigned as CEO of Mozilla. In the words of Mozilla Executive Chairwoman Mitchell Baker, this wasn’t a result of his past donation to Proposition 8, the California ban on gay marriage, but rather: “It’s clear that Brendan cannot lead Mozilla in this setting…The ability to lead — particularly for the CEO — is fundamental to the role and that is not possible here.” In other words, the controversy this brewed was tarnishing Mozilla’s reputation, trust, brand, etc.

Unfortunately, this is one of those situations where no one ends up feeling happy. This got ugly on both sides. It is sad that the co-founder of Mozilla and the creator of javascript had to resign. It sucks, I’m sure most of us had high hopes. At the same time Mozilla was being led by someone who wouldn’t apologize for wanting to ban gay marriage, so people had every right to voice disagreement over that promotion. 

Now, however, there is a lingering “meta” controversy over whether the way this was handled on both sides was right or wrong. Was the “anti-Eich” crowd too vengeful, too close to what has been described as a lynch mob? Were they being hypocritical and intolerant? Were the folks supporting Eich being insensitive to a growing civil rights movement, misunderstanding what Mozilla represents, erroneously mixing a specific case with a hypothetical slippery slope?

In the larger picture, these questions and issues exposed during the controversy are just one more signal that “tech” still has much growing up to do. On the one hand tech is dominating every industry and part of our lives. “Software is eating the world,” a quote from Marc Andreessen, which is often used to describe what is going on. And tech is still in its infancy or teenage years in terms of how long it has been with us, which brings challenges as it inundates our culture and organizational behavior practices. Tech is even influencing our politics now, as seen with Twitter in various countries, and the SOPA movement. Some say the tech community went too far with SOPA when websites blacked themselves out in protest, and of course the sexism that continually rears its head in tech is immaturity at its best.

When you have something that is massively influencing every part of our lives, but is still immature, it can only lead to more “meta” controversies like the Mozilla one. We (i.e. the community, the public) simply don’t know yet how to react appropriately to these situations; it’s going to take time to adjust to what tech in our lives means. Although I think most of those calling for Eich’s resignation were proportionate in their response, there were outliers who went too far in how they handled it. There will always be people who go too far, of course, but now tech can ignite these crowds in the blink of an eye and carry along people who would not normally participate. That said, I’m confident that as we grow to understand what tech means in our lives, we will resolve that issue in time.

I think we would all be well served if a post-mortem were done for this particular controversy. And this increasingly common situation of how “tech reacts” deserves to be studied as a larger whole. There is an awful lot we could learn about ourselves and about tech in our lives. That matters, because tech and its social impact aren’t going away; they will only increase.


The most enlightening reveal of the Mozilla CEO controversy

A new exclusive CNET interview with Brendan Eich, a backer of anti-LGBT causes, shows that the controversy is not dying down. My own thoughts on why he is a bad choice as CEO for the Mozilla Foundation were posted a few days ago. Since then, three board members have stepped down, with conflicting reports: some say they resigned in a form of protest, while Eich (in the CNET interview) and the remaining board say the departures were long planned.

Needless to say, things are very foggy over at Mozilla and the future is still unclear. One thing is clear, however: the leadership (CEO and board) is in fact incapable of leading. Even if they now decide to fire Eich and replace him with a more forward-thinking CEO, it will only be because they’ve caved after sitting back for weeks to measure public opinion; that’s playing politics rather than leading. The Mozilla board I’d like to see is one that knows the right thing to do from the get-go, or decisively changes course if a mistake is made. What this controversy has revealed is that Mozilla plays politics; it doesn’t lead. That paints a picture of a future at Mozilla inevitably filled with more mistakes.

Mozilla doesn’t share my values - both in terms of installing a CEO who doesn’t support LGBT rights and in its overall leadership characteristics. It’s disappointing. I’d like to see the remaining board resign, and I won’t be returning to any Mozilla products until that happens.


Mozilla needs to skate to where the puck is going, not where it’s been. LGBT rights

I just don’t understand WTF the Mozilla Foundation was thinking on this one. It’s akin to making someone a CEO who donated to segregation campaigns in the 1960s. You just wouldn’t do that.

Gay rights have not yet achieved the same acceptance as other civil rights, but one day they undoubtedly will. And when that day arrives - and it already has in most of Mozilla’s fan-base markets - this decision is going to haunt the Mozilla Foundation even more than it does today. Mozilla looks to the future in all it does, except, apparently, when it comes to its leadership. The future profile of a CEO will not include anti-LGBT beliefs, and Mozilla needs to be skating to where the puck is going, not where it’s been.

What is really troubling is that the origins of the tech industry, West Coast tech in particular, were about tolerance and civil rights. Personal computers and related technologies were about freedom from oppression. For a tech non-profit to install an anti-LGBT CEO is a complete 180 from why and how tech evolved from its early days. Technology isn’t just about what you do; it’s also about who does it. They go hand in hand.

If this was simply an oversight, missed in due diligence of the CEO’s past, then OK - but make the change happen.


Does the Google Cloud services price drop spell trouble for AWS? It depends.

With Google announcing massive price drops (https://developers.google.com/storage/), a lot of developers and tech managers are re-thinking their use of AWS, Rackspace, etc. Certainly the $0.026/GB monthly storage price and lowered Compute Engine prices make for stiff competition. I am not convinced yet, however.

First, depending on what one is trying to accomplish, the new bandwidth prices announced by Google are still more expensive than AWS once you reach a certain volume. For heavy bandwidth-out users, then, it may not make sense to choose Google over AWS if pricing is your only concern.

A larger issue is one of trust, and opinion shows many developers and decision makers are in agreement with me on this one. Google has failed users and developers countless times as it has pulled its APIs and services. Google has a one-year notification term for its cloud computing services, but that can change, and one year is possibly not enough time if your entire business is structured around the service. The fine details of Google’s cloud computing terms of use could also give one pause compared to AWS.

Further, AWS accounts for ~7% of Amazon’s overall revenue. In comparison, the Google cloud computing business brought in roughly 1.5% of Google’s overall 2013 revenue. That alone is enough to give me second thoughts about trusting my business or computing needs to Google, given its lengthy history of pulling services.

There are of course other reasons to distrust Google, which I won’t go into now.

AWS is likely to follow Google and continue dropping its prices as well. So, any prudent decision-maker should take a wait-and-see approach before jumping ship. And even then, make a careful analysis (including future growth scenarios) of how your business or process actually utilizes the various cloud services to determine the real current and future costs involved.
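As a rough illustration of that kind of analysis, here is a minimal sketch; apart from the $0.026/GB storage figure from Google’s announcement, every price and growth number below is a placeholder assumption, and real rate cards are tiered and far more complicated:

```python
# Rough sketch of a monthly cloud cost comparison under growth scenarios.
# All prices except Google's announced $0.026/GB storage are placeholder
# assumptions for illustration, not current rate cards.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    storage_per_gb: float  # $ per GB-month stored
    egress_per_gb: float   # $ per GB of bandwidth out

def monthly_cost(p: Provider, storage_gb: float, egress_gb: float) -> float:
    """Flat-rate estimate: storage plus bandwidth-out for one month."""
    return storage_gb * p.storage_per_gb + egress_gb * p.egress_per_gb

google = Provider("Google Cloud", storage_per_gb=0.026, egress_per_gb=0.12)
aws    = Provider("AWS",          storage_per_gb=0.030, egress_per_gb=0.09)

# Project a year of growth, e.g. storage and bandwidth both growing 10% a month.
storage_gb, egress_gb = 5_000.0, 20_000.0
for month in range(1, 13):
    row = ", ".join(
        f"{p.name} ${monthly_cost(p, storage_gb, egress_gb):,.0f}"
        for p in (google, aws)
    )
    print(f"month {month:2d}: {row}")
    storage_gb *= 1.10
    egress_gb *= 1.10
```

With a bandwidth-heavy profile like the one assumed here, the provider with the cheaper storage is not necessarily the cheaper provider overall; that break-even point is exactly what is worth finding for your own workload.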


FIRST Act isn’t the first to use doublespeak against the advancement of science

The Frontiers in Innovation, Research, Science and Technology (FIRST) Act (link) is doublespeak for “we’re actually going to limit Open Access.”

The FIRST Act is yet another bill winding its way through the US Congress that, despite making claims FOR science, would actually reduce the availability of Open Access. Luckily the Scholarly Publishing and Academic Resources Coalition (SPARC) has clarified the damage this bill would actually do to scientific advancement within the U.S. PLOS has done another writeup of the severe consequences this bill would bring.

In the past, similar bills such as the Research Works Act (RWA), backed by the Association of American Publishers and many paywall publishers, have used this doublespeak. The Clearinghouse for the Open Research of the United States (CHORUS), a publisher-backed proposal, is another initiative filled with doublespeak, with the real aim of controlling access - not opening it up. And more recently the “Access to Research” initiative from publishers does the opposite of what its title proclaims. It limits access to research in the digital age by adding a physical barrier, forcing you to travel hundreds of miles to a participating library instead of providing access from the convenience of your lab or home.

What really fascinates me, however, is the continued use of marketing doublespeak in these legacy publisher proposals to manufacture consent and distort the facts for financial gain. That they are pronounced with a straight face each time makes me just a little sick inside that people like this actually exist. The opposite of heroes, value creators, and leaders. If you haven’t noticed, these tactics grind my gears to the point of evoking a visceral emotional response.

Now I’ve looked to see who outside Congress is backing the FIRST Act by way of either public support or Congressional campaign donations and have yet to find a connection to the usual suspect publishers or associations. Please leave a comment if you do find a connection. 

Update

As Björn Brembs points out, a number of paywall journals and publishers have donated to the Congressmen responsible for bringing the FIRST Act to the House of Representatives. This is more than a smoking gun leading back to Elsevier and a few other large publishers known for backing previous anti-OA bills.


Neylon highlights another misleading survey - this one from NPG

Thanks to this tweet by @CameronNeylon we see a very loaded question from NPG about the consequences of Open Access licensing. I should also say that there are a few other misleading questions in this NPG survey - which looks to be as much propaganda as (poorly designed) survey material.

[Image: screenshot of the NPG survey question referenced above]

This seems to be the new scare tactic for anti-OA activists. Explain one possible commercial use case, one likely to offend or upset academics, while neglecting to state the many other reasons one would want to allow commercial re-use: teaching in academic settings (if the academic is paid, that’s commercial use), text/data mining for new cures, cases where physicians may hesitate to use or cite the research, or tools developed from it, because their use would count as commercial, etc.

Maybe you have a moral reason for not wanting a biopharma giant to profit off of your Open Access article. Fine, fair enough, but that actually doesn’t prevent them from using the information - facts can’t be copyrighted. More often than not, the use of a Non-Commercial OA license (e.g. CC-BY-NC) has the opposite effect from what the author hoped to achieve. Peter Murray-Rust explains this in an excellent writeup here. An NC license doesn’t prevent the publisher you use from profiting off the material, and it won’t stop pharmaceutical companies, but it does deter others with many legitimate use cases.

Had all software development in the early days of the 60s, 70s, and 80s restricted commercial use, we wouldn’t be here today discussing this. Open licensing with explicit permission for commercial reuse has been the foundation of the software that powers a majority of the world’s websites, and of the software that powers research activity in academic institutions. The parallels between Open Access articles and the early days of open-source software are massive.

For sure, all academics should be aware of the possible uses of their research, but the point is to make them fully aware of all use cases, not just a select few intended to scare. And we also need to understand that choosing a restrictive NC license may have unintended consequences as well. 

Updated to add: Many, including the Budapest Open Access initiative, do not consider OA licenses with an NC clause to actually be Open Access. I agree with this position.


Is Nobel Laureate Randy Schekman being a hypocrite? Bollocks.

Whoa. Some serious debate is flying around after the newly minted Nobel Laureate and Editor-in-Chief of eLife wrote that journals like Nature, Cell, and Science are damaging science.

On one side you find the supporters, such as co-founder of PLOS and UC Berkeley Professor Mike Eisen, who hopes Randy’s actions can inspire others. In the other corner are the haters shouting hypocrisy.

The way I see it, Randy had two options:

1. Say/do nothing at all, and thus inspire no one to take action.

2. Do what he did.

I’m on Randy’s side here. If we’re going to start making the changes that are needed within academia, then someone must speak up, even if it comes laden with ad hominem attacks of hypocrisy and conflicts of interest. And note my own COI as a co-founder of the Open Access journal PeerJ.

Let’s examine the fallacies of the naysayers’ arguments:

1. Schekman’s words ring hollow because eLife, like CNS (Cell, Nature, Science), has a high rejection rate, even though it is Open Access. As editor-in-chief of eLife he has a conflict of interest and should not make such statements.

This argument is ignoring the actual message and its possible impact. Whether eLife is a luxury journal or not doesn’t change the message being told. Same with the conflict of interest. Those are all separate issues from the message and how people can act on it.

Additionally, it’s naive to think that 1) everyone boycotting CNS would all of a sudden start publishing with eLife, and 2) that other publishing options (PeerJ, PLOS, small society journals, F1000Research, preprints, etc.) wouldn’t grow.

2. Even if everyone boycotts CNS, it won’t change things because the next three highest impact factor journals will replace them.

This is a non sequitur and the silliest argument of all. It ignores the fact that if CNS actually did go out of business, then Schekman’s words would have achieved an f’ing astounding result. Do these naysayers actually believe that if everyone boycotted CNS, it wouldn’t have other knock-on effects within the overall academic debate on impact factor?

What would really happen if everyone were to boycott CNS is that our funding bodies, governments, academic departments, etc. would take notice. It would mean that academics’ habits have actually changed, and that will lead to other changes. It won’t just lead to the next three journals replacing CNS; that conclusion is as unsupported as can be.

3. Schekman can only say boycott CNS now that he has secured his Nobel prize after publishing more than 40 times in those journals. Younger scientists don’t have that option.

This is an ad hominem argument. Again, whether Schekman is being a hypocrite or not has no bearing on the message that things must change in order to improve scientific research. Whether younger scientists have the luxuries that Schekman now has also has no bearing on the message. The message is “things must change.”

That we’re now debating the merits of Schekman’s call means what he said is already having an impact. And let’s remember that most of those hearing his message are not academics but members of the public, who are unaware of the issues at hand yet still have the power to change things through their elected officials.

If a Nobel Laureate isn’t allowed to state these things, then who is allowed? Reality is that everyone’s allowed, but not everyone has the voice that Randy now has. He can choose to remain silent, or he can try to have an impact that perhaps may help eLife, but will undoubtedly help advance science and other publishing experiments that are sorely needed. A rising tide raises all ships.

Finally, whether his words will have any real results at the end of the day or not isn’t a reason to stay silent. When we’re trying to push the boundaries we go into action knowing full well that failure is a possibility. If success were guaranteed then we’d have no need for inspiration.

Kudos to Randy Schekman for having the courage to do what he did, despite knowing the heat he’d take. That makes him more worthy of the Nobel than ever.


WTF is up with Apple of late?

This from the Guardian, discussing how the new iOS7 animations are literally making people ill. And the Hacker News discussion. And I tend to agree.

Last year it was the iOS6 maps disaster (and it still is one, really).

All in all, the design choices post-Jobs have been terrible. It’s as if Apple has stopped doing user testing prior to release (if they ever bothered under Jobs).

This is Apple (and possibly Jony Ive) - fail.


Thoughts on ALPSP and future of society publishers

I returned yesterday from Birmingham, UK and the 2013 ALPSP international conference. It was great to listen, to present, and of course nice that PeerJ won an award for its publishing innovation (we’ll do a proper post about that on the PeerJ blog shortly). 

I spent some time talking with different society publishers and staff. This was new for me. My co-founder at PeerJ is much more seasoned in the publishing world than me - I’m the outsider coming from more of a quasi tech/academia/academic software background. Thus, my perspective on the current situation facing publishing is probably refreshing, naive, flat out wrong in some areas, but dead right in other areas. Yes, I’m qualifying what I’m about to say next :) …

If I had to choose one analogy to describe the state of publishing, it would be a deer paralyzed in the beam of oncoming headlights. From numerous discussions at the annual ALPSP meeting, it became apparent that society publishers in particular are standing still in fear, unsure which way to turn or whether to make that risky move. From a high-level bit of questioning, it seemed many publishers didn’t have the right mix of people in their organizations for the digital world.

There was an interesting plenary session with Ziyad Marar (SAGE), Timo Hannay (Digital Science), Victor Henning (Mendeley/Elsevier), and Louise Russell (a publishing consultant). Ziyad and Timo seemed to have opposing perspectives on what a publisher today should be composed of or targeting. Ziyad was on the side of focusing on content, while Timo was more on the side of focusing on the tech. That’s a simplification, and both of them probably value and implement both in their orgs, but those extremes are the two sides of what I see in publishers today. Those who do not have people in place, either through empowerment or directly through titled positions, to make technology a centerpiece of their organization risk being stuck in the headlights.

I’ll be even more specific than technology: it’s user experience. We can all blame Apple for this one too. It may not be dominant over content just yet, but it’s coming, and those who do not have the tools and people in place will be left behind. This was missing in the organizations of many of those I spoke with at ALPSP. And to do user experience right, you need to be focusing on the right technologies and the right product strategies, with the right people. I gave a high-level talk on cloud computing and many commented that they just didn’t have the people within the society to make it possible. That’s a mistake, not because cloud computing is the answer, but because without them you can’t focus on building the tools needed to please the future reader, author, reviewer, etc.

What’s also interesting is that user experience isn’t something new to publishing; it’s been going on for 300 years. We think of publishers as just delivering content, but they’ve been tweaking the layout and typography of that content for centuries to make it more legible, more comprehensible, etc. That’s user experience. To make that happen today, though, requires people with different skill sets than even a decade ago, and those people are either avoiding careers in publishing, not given priority, not empowered enough, or not even considered.

Before Pete (PeerJ co-founder) and I announced PeerJ in 2012, I related to him a little research I had done on PLOS and its lack of technology focus. This came about because we wanted people to know how PeerJ would be different from what had come before. I went through the Wayback Machine on the Internet Archive to look at PLOS’s website history. One thing stood out to me - it took several years before any tech-related people started to appear in the staff list, and even today (as at other publishers) tech-empowered employees are not in positions of business strategy. I wanted PeerJ to make engineers equal to the editorial positions, and that’s how we’re different. That’s what’s needed if society publishers are going to continue.

Really, it isn’t tech versus content. They support each other; the only problem is that there is a lack of people in a position to make it happen today. Yesterday’s typographers are today’s user experience engineers, human-computer interaction experts, and software engineers. That’s what scared me the most in all of my conversations at ALPSP: the missing people.