About Mr. Gunn

Science, Scholarly Communication, and Mendeley

Do we really want a global town square?

In 2005, Lawrence Summers, president of Harvard, former US Secretary of the Treasury and Chief Economist of the World Bank, gave a speech at NBER where he discussed some employment data. He said he was offering a positive, observational view and not a judgemental or normative view.

…the data will, I am confident, reveal that Catholics are substantially underrepresented in investment banking […], that white men are very substantially underrepresented in the National Basketball Association, and that Jews are very substantially underrepresented in farming and in agriculture. These are all phenomena in which one observes underrepresentation and I think it’s important to try to think systematically and clinically about the reasons for underrepresentation.

He thought that these prefatory statements, along with the facts that he was attempting to be a little provocative and not speaking on behalf of Harvard, would open up a thoughtful discussion of the various factors leading to the under-representation of women in tenured science positions at top universities. He could not have misjudged his audience more badly. Shortly thereafter, he resigned as Harvard's president ahead of a no-confidence vote by the faculty.

A couple of years before the fateful speech, during Summers' tenure as Harvard president, a website called Facemash, created by a Harvard student, almost got its creator, Mark Zuckerberg, expelled. A few years later, Twitter was founded. Looking back from 2023, we have seen many intellectuals and public figures stumble on these platforms, and their mistakes generally fall into the same category of error. It's one anyone in a public speaking role must learn to avoid.

When connecting audio equipment, you have to be careful to avoid an impedance mismatch, or you risk lifeless instruments, digital glitches, and fried amps. In public speaking, you have to be careful to avoid a stage mismatch, or you risk lifeless audiences, digital mobs, and fried jobs. I'm not talking about the thing the podium is on; I'm talking about Robert Kegan's stages of human development. According to this framework, people go through at least 3 and up to 5 stages of development over their lives.


Stage 1: A very young child can sorta control their motor skills, but their impulses and perceptions aren't things they think about. A reflection of a baby in a mirror will be treated as real.
Stage 2: Impulses and perceptions can be thought about, but needs and desires feel objectively real. A child in this stage can understand why they're hungry and do something about it, but won't question their interest in playing a game.
Stage 3: In this stage, interests and desires can be examined, and, sadly, we lose our ability to play like children. We vibe with people and are emotionally engaged, but we might also panic or yell at someone in anger.
Stage 4: We understand how the influences of those around us shape our identity and yet sometimes do things that don't align with that. We learn the rules of society and how to fit into systems. We can keep a cool head when others are panicking, but might suppress emotions or feel disconnected. A judge would use Stage 4 thinking to assign a criminal penalty according to the law, even when public outcry demands a harsh sentence.
Stage 5: We understand many different kinds of systems and can reason about how different systems would work in a given situation. We have many different identities and don't feel like any particular one is the "true self". Emotions are welcomed, even when they're difficult, but what behavior results is not deterministic. A judge sparing a young mother from jail may be using Stage 5 thinking.

Unless there are people having tantrums and playing carefree in the aisles, you can expect most people are primarily using a stage 3 mode of relating to the world and to your speech. Here's the thing: while a comment made from a Stage 4 perspective can appear heartless when viewed from a Stage 3 perspective, a comment made from Stage 5 can look like a partisan Stage 3 comment. Most viral outrage comes from this kind of stage mismatch. An ironic joke about AIDS not being just an African thing requires an ability to consider the different identities that make up a person, characteristic of stage 5. Heard from a stage 3 perspective, it's enraging. If you are an intellectual, you probably spend more time than most around people relating to the world using strategies from stages 4 and 5, and you get used to saying complex things that are nevertheless understood. This is where people mess up. They think they're in a cozy little salon, but they're in the town square.

Twitter and Facebook have been called our “global town square”. Those who coined the phrase no doubt meant to vaguely gesture at a bucolic vision and egalitarian ideal of free speech, but I think the metaphor is more apt than that.

Norman Rockwell's Freedom of Speech painting, showing a middle-class white guy standing up in a meeting, while various other people look on approvingly.

You are not here.

A picture of Times Square in NYC

You are here.

As in Times Square, you are surrounded by many people, not all of whom you can see, not all of whom have your best interests in mind, and not all of whom can avoid lashing out if something you say triggers jealousy or rage. In the global Times Square, you don’t just produce content, you are content, and since the invention of social media, you are always in the global Times Square.

The big thing LLM interfaces are missing: dialogue

Daniel Tunkelang brought my attention to a lovely rant from Amelia Wattenberger about the lack of affordances in chat interfaces, and I began to wonder what it means for a chat interface to have an affordance. In other words, what's the obvious thing to do with a chat interface, the way it's obvious that a glove can be put on a hand to protect it? The obvious thing to do is to have a conversation, but people building products miss the most important thing about conversation: it's a process, not a transaction.

Product people think the obvious thing to do with an LLM chat interface is to ask questions and get answers, and their critics quickly respond that the answers are merely plausible sentences and any truth is incidental. This whole thing has become tiresome, and no one is putting their finger on the heart of the issue to move the conversation forwards. Taking a step back reveals the confusion. How does a conversation with a subject matter expert go? You start by asking bad questions, you get questions from them in return, and then you ask better ones. Through a process of back-and-forth dialogue, you end up with a better understanding that is often very different from what you thought you were after when you started the conversation, as this person seeking help with a regular expression to parse HTML famously discovered.

It's the process of dialogue that's the important thing here, so thinking of the exchange in terms of a transaction is just a confused conceptual model. To make an LLM product that really delivers value through a chat interface, you have to provide dialogue as an affordance.

There are lots of different kinds of conversations that people have. Do you approach the LLM as an all-knowing oracle, a creative companion, a virtual assistant, or something else? What needs to be apparent when someone enters a conversation with an LLM? Looking again at how a conversation is entered with an oracle or a creative companion or an assistant provides some hints. An expert may present themselves with a degree, a wizened face, a tweed coat with elbow patches, or an office in an old, ivy-covered hall of knowledge, but it's over the course of a conversation with them that you move towards a better understanding. A creative professional may have a bright office filled with primary colors and whiteboards, but it's through collaborative discussion that you get inspired and flesh out your vision. The best assistance comes from your relationship with an assistant, too. Not transactions, but dialogue. (Maybe even dialogos!)

Ok, so how do you make this concrete in terms of a chat interface, and how do you know if your design is working for people? I'm a communications professional, not a product person, so I can only gesture in a direction, but if there's a way to quantify how well a query partitions a space of information, that could be a good place to start to figure out whether your expert is engaging in effective dialogue and leading someone to a better understanding. To take a simple example, imagine someone asks for gift ideas. If you tell the salesperson at a fancy retail store that you're looking for something for your mom, they'll ask what occasion, because occasion is one of the main ways gift-giving is partitioned. A chat agent should afford carving up the information space in a similar fashion. An LLM contains multitudes, so it doesn't make sense to put ChatGPT in a tweed coat, but that's not important. The important thing is that there's a process of dialogue through which understanding or inspiration or whatever is approached.
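One way to make "how well a query partitions a space of information" quantifiable is information gain: the drop in entropy over the candidate set once the answer to a clarifying question is known. The sketch below is just that idea in miniature; the tiny gift catalog, its attributes, and the numbers are all invented for illustration.

```python
from collections import Counter
from math import log2

def entropy(items):
    """Shannon entropy (in bits) of the distribution of values in items."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(catalog, attribute):
    """How much asking about `attribute` shrinks uncertainty about the gift.

    Gain = entropy of the whole catalog minus the size-weighted entropy
    of each partition induced by an answer.
    """
    names = [g["name"] for g in catalog]
    before = entropy(names)
    after = 0.0
    for value in {g[attribute] for g in catalog}:
        part = [g["name"] for g in catalog if g[attribute] == value]
        after += len(part) / len(catalog) * entropy(part)
    return before - after

# Made-up catalog: "occasion" splits it cleanly, "color" doesn't split it at all.
catalog = [
    {"name": "flowers",  "occasion": "birthday",    "color": "red"},
    {"name": "necklace", "occasion": "anniversary", "color": "red"},
    {"name": "card",     "occasion": "birthday",    "color": "red"},
    {"name": "cruise",   "occasion": "anniversary", "color": "red"},
]
```

Here, asking about the occasion yields a full bit of gain (it halves the candidate set), while asking about color yields zero, so a dialogue-aware agent would ask about the occasion first, much like the salesperson does.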

Outsourcing judgement, the AI edition

With every new technology, people try to do two things with it: communicate with others and rate people. AI is no exception, and HR and communications professionals should expect it to show up in two places: social media analysis and candidate screening.

Over the past 13 years, I've become an expert in many different ways to rate people, from the academic citation analysis tools on which universities spend millions to dating apps, and I've used a number of tools to monitor social media. The tools are dangerous to your business if you don't know what you're doing. You absolutely cannot assume social media is an accurate reflection of actual customer or consumer sentiment. Social media monitoring tools will show you thousands of mentions from accounts with names like zendaya4eva and cooldude42, and the tools roll everything up into pretty dashboards that summarize the overall sentiment for you. There's just one problem: social media sentiment analysis sucks. Posts aren't long enough for the algorithms to get enough signal, and they can't detect sarcasm or irony. You're better off just looking at a sample of posts than using a sentiment dashboard. Analytics vendors know this and are working on building AI into the tools to make this better, but if you're looking at social media sentiment because it's easier to get than data on actual customers, you're like the proverbial drunkard, looking for your keys where the light is better rather than where you actually lost them, and no amount of AI can fix that.
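To see why short posts defeat these algorithms, consider a toy lexicon-based scorer of the kind many dashboards are ultimately built on. The word lists and posts here are made up for illustration; the point is that pure word-counting has no way to notice sarcasm.

```python
import re

# Hypothetical sentiment lexicons; real tools use much bigger lists,
# but the counting logic is the same.
POSITIVE = {"great", "love", "awesome", "nice"}
NEGATIVE = {"hate", "terrible", "awful", "broken"}

def naive_sentiment(post: str) -> int:
    """Positive-word count minus negative-word count."""
    words = re.findall(r"[a-z']+", post.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A sarcastic complaint scores as glowing praise:
naive_sentiment("Great, my flight is delayed again. Just great.")
```

There's no word-level signal that flips the meaning, and a ten-word post gives the model almost nothing else to work with, which is why eyeballing a sample of real posts beats trusting the dashboard's rollup.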

Candidate screening tools make some of the same promises: "We can analyze the social media history of a candidate and flag areas of concern!" I've written social media policies for several organizations, and never have I seen a hiring or firing decision depend on a social media post that required a tool to flag. It's very tempting to outsource our judgment. Thinking is hard, and people aren't always very good at it. You might think it's better to have an objective process that eliminates conscious or unconscious bias, but when you do this, you're taking agency out of the hands of HR and the hiring manager. Hiring is a hard, multi-factorial decision, and the last thing you want to do is outsource judgment here.

Progressive summarization of audio & video to retain more of what you hear in podcasts & watch in online lectures.

I read a lot, and in a lot of different places. Sometimes I’m just reading for fun, but when I’m reading something that I want to remember and be able to share with others or apply in my own life, I have found annotation and progressive summarization to be effective approaches. These approaches generally require text, but with the addition of a few services that mostly play nicely together, you can extend this approach to audio and video.

What You Need

Accounts at Otter, Readwise, Hypothesis, and Roam.
The Hypothesis toolbar in your browser of choice (I like Firefox).

The Process

Let’s say you’re watching a lecture. Instead of trying to scribble notes in a notebook that you’ll later have to transcribe into Roam, you open Otter and let it start creating a text transcript. Take pictures or screenshots as you go, because Otter will be able to place those in the transcript according to timestamp. When it comes time to review, you use Hypothesis to annotate the Otter transcript, then you write some notes in Roam summarizing the insights from the lecture. If you’ve connected your Hypothesis account to Readwise, your highlights will be occasionally re-surfaced for you to review, which is a key step in making them actionable. There’s also a way to get spaced repetition in Roam.

The Setup

At Readwise, you have a bunch of options for connecting highlights. Enable Hypothesis and it will pull in all your highlights from Hypothesis, including the ones you’ve made on the Otter transcripts. You’ll find them under the Your Articles section at Readwise. You can review there and write up summaries in Roam, linking to other concepts and notes.

Why It Works

It works for me because I use Readwise as a sort of catch-all bucket for all the stuff I already read in so many places – Kindle, Twitter, and all the stuff I find via Twitter and shove into Pocket – and now I can also use Otter to convert things I listen to or watch into a form that Readwise can catch & periodically re-surface for me.


  1. When you select ‘view in article’ at Readwise, it will take you to Hypothesis, not Otter. Otter can generate sharing links that you can annotate, or you can export the transcript and annotate it somewhere else that’s publicly accessible, which is probably the best course since it leaves you with your own backup.
  2. Making extensive use of all these services costs a little money. Readwise is a couple bucks a month, and Otter costs a little bit if you go over their free minutes. Roam likewise has a subscription plan. I personally believe that if you are going to invest a lot of time and effort into building a personal knowledge management system, you’re going to want that system to stick around and get better, so you should hope they charge enough to do so, but I know even a couple bucks a month can be hard to come up with on a grad student budget, so here are some options. Most YouTube videos have a transcript generated by Google, which may be of higher quality and won’t use up your Otter minutes. Also, Docdrop is a service from the founder of Hypothesis that facilitates annotation of all sorts of document types and can accept YouTube links.
  3. These services are all relatively new. There is a possibility that they go under or get bought by a company with a different privacy policy. Carefully inspect the privacy policies of all the services you use, consider not using services that don’t let you delete or get your content out easily (Evernote, for example), and keep your own backups. I will note that services getting acquired is not necessarily a bad thing. My company, Mendeley, was acquired by Elsevier 7-ish years ago and it’s still going strong. Also, services that charge money tend not to be as intrusive to your privacy.

How to create a Twitter list using the command line Twitter client, t, on Windows.

Hashtags are nice, but I wanted to be able to dynamically create and delete Twitter lists as well & I found the t Ruby client which allowed me to do this. Here’s how to get it going from a blank slate.

First, install Ruby with RubyInstaller. You’ll also need the DevKit.
Install the Ruby gem with gem install t. Then, register an application with Twitter. Next, authenticate the client with t authorize, which should take you to Twitter and ask whether you want to let the app you created access your account. You might have some issues with silly stuff getting the authentication to work, like outdated keys or whatever.

Once you get to where you can tweet from your account, you’re ready to go. Create a new list with t list create [name of list]. The neat thing about having a command line client is that all the pipes and redirects and stuff work, so you can do cat listofhandles.txt | xargs t list add [nameoflist]. I think you have to have Cygwin installed for xargs and cat to work, but I guess you could just do a batch file with a loop if you didn’t care about looking cool.
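Here's the whole pipeline sketched end to end. The list name and handles file are made up, and echo is prepended to the final command so you can preview exactly what xargs will hand to t without touching your Twitter account; drop the echo to run it for real.

```shell
# Made-up handles file, one @handle per line.
printf '@alice\n@bob\n@carol\n' > listofhandles.txt

# One-time setup (requires Ruby):
#   gem install t
#   t authorize
#   t list create conference

# xargs appends each line of input to the end of the command, so
#   cat listofhandles.txt | xargs t list add conference
# expands to: t list add conference @alice @bob @carol
# Prepending echo previews that expansion without calling Twitter:
cat listofhandles.txt | xargs echo t list add conference
```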

Cities make people unhappy

Glaeser, Gottlieb, and Ziv have an economics working paper at NBER (not Open Access, unfortunately) in which they report their assessment of the happiness of various cities across the US. Vox has a nice map which makes the point pretty well, but I had to grab the data and take a look for myself.

I got population data from the US Census website and, happily, the happiness study used the same metropolitan statistical area names as the census does, so it was a simple merge to visualize population vs. happiness.
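The merge itself is one line in pandas. The column names and numbers below are placeholders, not the actual Census or NBER data; the point is just that matching metropolitan statistical area names make the join trivial.

```python
import pandas as pd

# Placeholder data; in practice each frame would come from pd.read_csv()
# on the Census export and the paper's happiness table.
population = pd.DataFrame({
    "msa": ["New York-Newark-Jersey City, NY-NJ-PA", "Scranton--Wilkes-Barre, PA"],
    "population": [19_000_000, 555_000],
})
happiness = pd.DataFrame({
    "msa": ["New York-Newark-Jersey City, NY-NJ-PA", "Scranton--Wilkes-Barre, PA"],
    "adjusted_happiness": [-0.3, 0.1],
})

# Because both sources use the same MSA names, an inner join on that
# column is all the alignment needed.
merged = population.merge(happiness, on="msa", how="inner")

# A scatter plot then shows adjusted happiness against (log) population:
# merged.plot.scatter(x="population", y="adjusted_happiness", logx=True)
```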
adjusted happiness vs. population

I’ve come a long way.

When I started graduate school, the only thing I knew about publishing was how to write a blog post, and the only thing I knew about my library was that I hated their website. I didn’t know what open access was, and if it wasn’t in Pubmed, it pretty much didn’t exist for me. All I wanted to do was do my research and work in the lab.

Back in 2004, I started work on my first paper and was exposed to the academic publishing process for the first time. For someone who was already familiar with blogging, the whole process made no sense. If I wanted to cite a fellow blogger, I could just link to their post with a short little a href=”http://theirblog.com/link-to-post”. I could anchor my link to a bit of text in my post, and they’d even get notified that I had linked to them. Likewise, I could subscribe to the RSS feed of their blog and get updates whenever they published. It was easy to see who was reading your stuff because Google Analytics was free (and even before that, there were plenty of log parsers). Why, then, would a group of people, among the smartest in the world, communicating potentially life-saving or economy-altering information, use a system so inferior to the one people used to post pictures of what they had for dinner?

Well, I was the only person in my lab, possibly in the whole med school, who blogged, so no one understood what I was complaining about. I eventually found some colleagues online who felt similarly, and we’d talk about why academic paper search sucked so bad, why reference management sucked so bad, and occasionally someone would build a new tool, which no one would use. The failure was usually chalked up to the developers not having access to enough data and, if the tool required a critical mass of users, it was considered dead on arrival because academics, having no incentive, wouldn’t take time away from research or writing papers to use it.
So to me, the reason I had to use wonky, clunky, ugly tools and endure a long, tedious process to get published came down to two things: lack of open data and impact factor chasing. As I dug into this, which helped me procrastinate on writing my qualifying exam, I learned that the lack of open data was primarily because academic publishing was mostly a for-profit endeavor and the entrenched interests had no desire to loosen their grip on their data. This was in the thick of the music industry’s “sue ’em all, let God sort ’em out” business strategy, and newspapers were just starting to get worried, so the idea that you could do better providing a service instead of selling a product wasn’t really on people’s minds that much. Likewise, publishers didn’t do much to discourage impact factor chasing by scientists, and there was literally nothing being done on measures of impact beyond citations. It seemed clear to me, then, that if I wanted to be freed from my drudgery, what I needed to do was get more open metadata and try to establish something that could free research from the tyranny of the impact factor. I plodded along for a few more years, publishing a few more papers and supporting every new tool that arose which I thought had a reasonable chance of success, provided it could result in more open metadata, open access, and impact metrics. By the time I was done with my PhD, I knew that an academic career wasn’t in the cards for me.

I joined a biotech startup in San Diego in early 2008, but later that year the company fell on hard times, along with the rest of the country. By early 2009 my time with the company was nearing an end, but I had still been following what was happening online and had begun to advocate for a new startup that seemed like it had a better idea than the boring old “social-network-for-scientists” clones that were popping up everywhere. As my involvement with the biotech tapered off, I was able to increase my involvement with Mendeley, eventually becoming part of the full-time US staff.

When I began to work for Mendeley, I was well aware of the possibility that they would get bought at some point. Nonetheless, I was excited to be able to play a role in helping them become a success. At the time my thought process was pretty simple: they were a non-ugly version of Endnote which also happened to be building a collection of research metadata that they could make available under an open license, and they could provide a measure of impact distinct from citations. Freedom at last!
Now, 4 years later, there’s a lot I have to be pleased about when I look back. I presented one of the few for-profit business use cases for open access (PDF) to the US Office of Science and Technology Policy. We have ~90M documents available via an API with a permissive CC-BY license. We’re one of the leading contributors of data to the growing #altmetrics movement.

Now my career is entering another phase. I’m going to leave all the “we’re so excited” stuff for the official announcement, but I think Mendeley has gotten to a size where it’s no longer a startup, and smart people are predicting open access will be a reality soon. As Victor notes, we could have carried on, but it would have taken longer for us to get to where we needed to be, and there’s no guarantee we would have made it. Springer + Papers or Nature + Readcube could put more marketing muscle behind their apps, and neither of them has as open a philosophy as we do. What about Zotero? I think if Zotero were going to change things, they would already have done so, but maybe they could team up with the Digital Public Library of America or the Center for Open Science.

I do think there’s a possibility that we could do some good as part of Elsevier. Having talked with tons of people, from the CEO of Elsevier on down, I am now convinced that they want to be a part of the changes, instead of trying to fight them off like the recording companies did. There are and will be a couple competing narratives: They bought us to bury us, we got paid tons of money so we said, “Fuck Open Access”, etc. This is going to be put in the context of Google Reader shutting down, Delicious “sunsetting”, etc. However, I’m not personally getting a pile of money from all this, and I never would have stayed unless I was convinced that they legitimately want to be part of the change to an open access publishing system.

So I’ll be staying with Mendeley. I have been told that my day-to-day job will remain the same and that my voice is valued. I trust my friends to keep me honest and to call out bullshit when they see it. I’m grateful to have had this opportunity, over the past 9 years, to not only be a voice for a better way of doing and communicating research, but to be a pair of hands. I’ll learn everything I can about working within Elsevier and, after a couple years, if we don’t finally end up with freely available academic paper metadata and more Google Analytics-like research impact information, it won’t be because I didn’t try my best. That’s my promise and I expect – need – anyone who’s reading this to hold me to it.

Other posts:
TC: http://techcrunch.com/2013/04/08/confirmed-elsevier-has-bought-mendeley-for-69m-100m-to-expand-open-social-education-data-efforts/
Q&A: http://blog.mendeley.com/press-release/qa-team-mendeley-joins-elsevier/
SciAm: http://blogs.scientificamerican.com/information-culture/2013/04/09/elsevier-giant-for-profit-scholarly-publisher-buys-mendeley-free-citation-manager-and-discovery-tool/
Jason Hoyt: http://enjoythedisruption.com/post/47527556151/my-thoughts-on-mendeley-elsevier-why-i-left-to-start