If I published in or reviewed for PLoS, I’d be pissed off too.

Cameron Neylon responds to the allegations that PLoS is a pay-to-play vanity press:

That an author pays model has the potential to create a conflict of interest is clear. That is why, within reputable publishers, structures are put in place to reduce that risk as far as is possible, divorcing the financial side from editorial decision making, creating Chinese walls between editorial and financial staff within the publisher. The suggestion that my editorial decisions are influenced by the fact the authors will pay is, to be frank, offensive, calling into serious question my professional integrity and that of the other AEs. It is also a slightly strange suggestion. I have no financial stake in PLoS. If it were to go under tomorrow it would make no difference to my take home pay and no difference to my finances. I would be disappointed, but not poorer.

(Cameron Neylon, “In defence of author-pays business models,” Science in the Open, cameronneylon.net, April 2010)

You should read the whole thing.


Science Blogging Benefits Everyone

My colleague David Crotty has a rant at Bench Marks wherein he suggests that Nature’s blogging advocacy is just a shallow attempt to get more content for Nature Blogs, and that blogging by scientists is a fad that can’t replace mainstream media coverage of science and won’t amount to much otherwise. He’s certainly entitled to his opinion, but I think there’s another way to see things, and I’d like to present a counterpoint to his Nicholas Carr-ying on.

I’ve joined Mendeley as Community Liaison.

Reference managers and I have a long history. All the way back in 2004, when I was writing my first paper, my workflow went something like this:
“I need to cite Drs. A, B, and C here. Now, where did I put that paper from Dr. A?” I’d search through various folders of PDFs, organized according to a series of evolving categorization schemes, and rifle through ambiguously labeled folders in my desk drawers, pulling out things I knew I’d need handy later.

If I found the exact paper I was looking for, I’d then open Reference Manager (v6, I think) and enter the citation details, each in its respective field. If it found the article, I’d select it and add it to the group of papers I was accumulating. If it didn’t find it, I’d go to PubMed and search for the paper, again entering each citation detail in its field, then do the required clicking to get the .ris file, download that, and import it into Reference Manager. Then I’d move the reference from the “imported files” library to my library, clicking away the 4 or 5 confirmation dialogs that popped up along the way.

On to the next one, which I wouldn’t be able to find a copy of and would have to search PubMed for, whereupon I’d find more recent papers from that author, if I was searching by author, or other relevant papers from other authors, if I was searching by subject. Not wanting to cite outdated info, I’d click through from PubMed to my school’s online catalog, re-enter the search details to find the article in my library’s system, browse through the results until I found a link to the paper online, download the PDF and .ris file (if available), or actually get off my ass and go to the library to make a copy of the paper.

As I was reading the new paper from Dr. B, I’d find some interesting new assertion, follow that trail for a bit to see how good the evidence was, get distracted by a new idea relevant to an experiment I wanted to do, and emerge a couple of hours later with an experiment partially planned and a desire to restructure the outline of my introduction to incorporate the new perspective I had gained. Of course, I’d want to check that I wouldn’t be raising the ire of a likely reviewer by failing to cite the person who first came up with the idea, so I’d have some background reading to do on a couple of likely reviewers.

The whole process, from the endless clicking away of confirmation prompts to the fairly specific PubMed searches that nonetheless pulled up thousands of results, many of which I wasn’t yet aware of, made for extraordinarily slow going. It was xkcd’s Wikipedia problem writ large.

Could this be the Science Social Networking killer app?

There are tons of social tools for scientists online, and their somewhat lukewarm adoption is a subject of occasional discussion on FriendFeed. The general consensus is that the online social tools which have seen explosive growth are the ones that immediately add value to an existing collection. Good examples of this are Flickr for pictures and YouTube for video. I think there’s an opportunity to similarly add value to scientists’ existing collections of papers, without requiring any work from them in tagging their collections or anything like that. The application I’m talking about is a curated discovery engine.

There are two basic ways to find information on the web: searching via search engines, and having content surfaced for you via recommendation engines. Recommendation engines become increasingly important as the volume of information grows, and there are two basic types: algorithmic and human-curated. Last.fm is an example of an algorithmic recommendation system, where artists or tracks are recommended to you based on correlations in “people who like the same things as you also like this” data. Pandora.com is an example of the human-curated kind, where experts have scored artists and tracks on various components, and that data feeds an algorithm which recommends tracks that score similarly. Having used both, I find Pandora does a much better job with recommendations, and the reason is that it’s useful immediately. You can give it one song, and it will immediately use what’s known about that song to queue up similar songs, based on the experts’ back-end scoring. Even the most technology-averse person can type a song into the box and get good music played back, with no need to install anything.
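To make that concrete, here is a minimal sketch of this kind of content-based recommendation, assuming expert-assigned feature scores are already on hand. The feature names, scores, and catalog below are all invented for illustration; a real system like Pandora’s obviously uses far richer data.

    from math import sqrt

    # Hypothetical expert-assigned scores (0 to 1) for a few musical
    # attributes, in the spirit of Pandora's human-curated features.
    # Every name and number here is invented for illustration.
    CATALOG = {
        "Song A": {"tempo": 0.8, "distortion": 0.9, "vocals": 0.3},
        "Song B": {"tempo": 0.7, "distortion": 0.8, "vocals": 0.4},
        "Song C": {"tempo": 0.2, "distortion": 0.1, "vocals": 0.9},
    }

    def cosine(a, b):
        """Cosine similarity between two feature dicts with the same keys."""
        dot = sum(a[k] * b[k] for k in a)
        norms = (sqrt(sum(v * v for v in a.values()))
                 * sqrt(sum(v * v for v in b.values())))
        return dot / norms if norms else 0.0

    def recommend(seed, catalog, n=2):
        """Rank every other item by similarity to the seed's expert scores."""
        scores = {name: cosine(catalog[seed], feats)
                  for name, feats in catalog.items() if name != seed}
        return sorted(scores, key=scores.get, reverse=True)[:n]

    print(recommend("Song A", CATALOG))  # ['Song B', 'Song C']

The design point is the one that matters: the expensive expert scoring lives on the back end, so a single seed item is enough to get a useful recommendation on the very first use.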

Since the uneven success of online social tools for scientists is largely attributed to a lack of participation, I think a great way to pull scientists in would be to offer that kind of value up front: you give it a paper or a set of papers, and it tells you the ones you need to read next, or perhaps the ones you’ve missed. My crazy idea is that a recommendation system for the scientific literature, using expert-scored papers to find relevant related work, could do for papers what Flickr has done for photos. It’s also exactly the kind of thing one could build without having to hire a stable of employees. Just look at what Euan did with PLoS comments and results.
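The sketch above extends naturally from one seed to a set of seed papers: average the seeds’ scores into a profile, then rank everything else against it. As before, the paper names, feature axes, and numbers are made up; they stand in for whatever dimensions expert curators would actually score.

    from math import sqrt

    # Invented expert scores for papers on a few hypothetical axes;
    # purely illustrative, as in the music sketch above.
    PAPERS = {
        "Smith 2006": {"chip-seq": 0.9, "stem cells": 0.1, "modeling": 0.2},
        "Jones 2007": {"chip-seq": 0.8, "stem cells": 0.2, "modeling": 0.1},
        "Lee 2005":   {"chip-seq": 0.1, "stem cells": 0.9, "modeling": 0.3},
        "Patel 2008": {"chip-seq": 0.7, "stem cells": 0.1, "modeling": 0.8},
    }

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        norms = (sqrt(sum(v * v for v in a.values()))
                 * sqrt(sum(v * v for v in b.values())))
        return dot / norms if norms else 0.0

    def reading_list(seeds, papers, n=2):
        """Average the seeds' scores into a profile, then rank the rest."""
        profile = {k: sum(papers[s][k] for s in seeds) / len(seeds)
                   for k in papers[seeds[0]]}
        candidates = [p for p in papers if p not in seeds]
        return sorted(candidates,
                      key=lambda p: cosine(profile, papers[p]),
                      reverse=True)[:n]

    print(reading_list(["Smith 2006", "Jones 2007"], PAPERS))
    # ['Patel 2008', 'Lee 2005']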

Science social bookmarking services such as Mendeley, or perhaps search engines such as NextBio, are perfectly positioned to do something like this for papers, and I think it would truly be the killer app in this space.

Thomson Scientific has a closed science search engine.

They sent me a survey and asked me some simple questions, but I don’t think they asked the right ones, so I’m going to give a free-form review here. I think it’s a great idea, and it presents some features not available anywhere else, but it’s missing some important content, and, like everything Thomson does, it suffers from some usability issues.

I have reservations about WebCite

Via BBGM, I hear of WebCite, an on-demand Wayback Machine for web content cited in academic publications. It’s important that links cited in academic publications continue to resolve to their intended content over time, but how valuable is such a service, and whose responsibility is it?

If the citing author feels it’s important, they should make a local copy; they have the same right to do so as a repository does. If the cited author feels the link is important, they should take steps to maintain the accessibility of their content. If neither of these things happens, that raises the question of whether the value of the potentially inaccessible content is greater than the cost of a high-availability mirror of the web whose funding will come from as-yet-unspecified publishing partners.

These things aside, there are some important technical flaws with the project:

  • The URL scheme removes any trace of human-readable information. It’s another one of those damn http://site.com/GSYgh4SD63 URL schemes.
  • All sites have downtime. Is the likelihood of any given article being available made greater by putting it all under one roof?
  • What about robots.txt-excluded material? A search engine isn’t allowed to archive it, and many publishers have somewhat restrictive search engine policies. (A quick way to check what a crawler may fetch is sketched after this list.)
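As promised above, here’s a quick way to check what an archiving crawler would be allowed to fetch, using Python’s standard-library robots.txt parser. The site URL and user-agent string are placeholders, not WebCite’s actual crawler:

    from urllib.robotparser import RobotFileParser

    # Placeholder publisher site; substitute a real one to test.
    rp = RobotFileParser()
    rp.set_url("http://publisher.example.com/robots.txt")
    rp.read()

    # An archiver that honors robots.txt can't take a copy of a cited
    # page the publisher has excluded. "ExampleArchiver" is a made-up
    # user-agent string.
    print(rp.can_fetch("ExampleArchiver",
                       "http://publisher.example.com/article/123"))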
Of course, it’s much easier to find flaws in a solution than to come up with one in the first place. Still, it seems to me that a DOI-like system of semantic permalinks, which would always point to content wherever it moved around the web, would work better, lead to a more complete index, and be much cheaper to run as well (a minimal sketch of what I mean follows). I know they chose archiving rather than redirecting because they wanted to link to the version of the page on the day it was cited, and that’s a good idea, but if having a copy of the page as it was is important, the author needs to make a local copy rather than hope some third party will do it for them.
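Here is a minimal sketch of that redirecting alternative, assuming a registry of stable, human-readable permalinks that each 301-redirect to wherever the content currently lives. Every path and target URL below is invented; a real service would also need a way for cited authors to update their entries when pages move.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical registry mapping stable, semantic permalink paths to
    # the content's current location. The cited author updates the target
    # when a page moves; the permalink in the citation never changes.
    REGISTRY = {
        "/2008/example-author/some-cited-post":
            "http://example.com/blog/some-cited-post",
    }

    class Resolver(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REGISTRY.get(self.path)
            if target:
                self.send_response(301)  # permanent redirect to current home
                self.send_header("Location", target)
                self.end_headers()
            else:
                self.send_error(404, "Unknown permalink")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Resolver).serve_forever()

The contrast with the GSYgh4SD63-style scheme above is that the permalink itself carries meaning: even if the resolver someday vanished, a reader could still tell what had been cited.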

Jon Udell likes it, but I’m feeling like it needs a little work.