Where’s the art in artificial intelligence?

“In the beginning was the Word, and the Word was with God, and the Word was God.” – John 1:1 KJV

When I think about creative works generated by generative AI, I think about golems. According to legend, Rabbi Yehudah Loew protected the Jews of Prague by creating an automaton, animating a soulless lump of clay with the Word of God. Like the Rabbi of folklore, when we write or produce a work of art, we like to think we’re doing something that imbues our work with soul. People say an image created by GenAI lacks the “creative spark”, but what does this really mean? What distinguishes an illustration by Sendak or Doré or Giger from a similar work created by prompting Midjourney?

Imagine Hugging Face puts on an event where they place a generated illustration next to a work from a famous illustrator and ask an art critic to point out the differences. The critic will get deep in the weeds and identify various distinctions. Then they’ll ask another critic, who’ll emphasize different ones, and a third, who’ll single out still others as crucial. Then Hugging Face will declare, “Aha! The critics couldn’t reproducibly find differences, and therefore there really are none!” Lots of breathless media headlines and dunking on critics will ensue, unless…
Continue reading

Do we really want a global town square?

In 2005, Lawrence Summers, president of Harvard, former US Secretary of the Treasury and Chief Economist of the World Bank, gave a speech at NBER in which he discussed some employment data. He said he was offering a positive, observational view, not a normative or judgmental one.

…the data will, I am confident, reveal that Catholics are substantially underrepresented in investment banking […], that white men are very substantially underrepresented in the National Basketball Association, and that Jews are very substantially underrepresented in farming and in agriculture. These are all phenomena in which one observes underrepresentation and I think it’s important to try to think systematically and clinically about the reasons for underrepresentation.

He thought that these prefatory statements, along with his caveats that he was attempting to be a little provocative and not speaking on behalf of Harvard, would open up a thoughtful discussion of the various factors leading to the under-representation of women in tenured science positions at top universities. He could not have misjudged his audience more badly.1 Shortly thereafter, he resigned as Harvard’s president ahead of a no-confidence vote by the faculty.

A few years before the fateful speech, during Summers’s tenure as Harvard president, a website called Facemash almost got its creator, a Harvard student named Mark Zuckerberg, expelled. Two years later, Twitter was founded. Looking back from 2023, we have seen many intellectuals and public figures stumble on these platforms, and their mistakes generally fall into the same category of error. It’s one anyone in a public speaking role must learn to avoid.

When connecting audio equipment, you have to be careful to avoid an impedance mismatch, or you’ll end up with lifeless instruments, digital glitches, and fried amps. In public speaking, you have to be careful to avoid a stage mismatch, or you’ll end up with lifeless audiences, digital mobs, and fried jobs. I’m not talking about the thing the podium is on; I’m talking about Robert Kegan’s stages of human development. According to this framework, people go through at least 3 and up to 5 stages of development over the course of their lives.

Briefly:

Stage 1: A very young child can sorta control their motor skills, but their impulses and perceptions aren’t things they think about. A reflection of a baby in a mirror will be treated as real.
Stage 2: Impulses and perceptions can be thought about, but needs and desires feel objectively real. A child in this stage can understand why they’re hungry and do something about it, but won’t question their interest in playing a game.
Stage 3: In this stage, interests and desires can be examined, and, sadly, we lose our ability to play like children. We vibe with people and are emotionally engaged, but we might also panic or yell at someone in anger.
Stage 4: We understand how the influences of those around us shape our identity, and yet we sometimes do things that don’t align with it. We learn the rules of society and how to fit into systems. We can keep a cool head when others are panicking, but we might suppress emotions or feel disconnected. A judge would use Stage 4 thinking to assign a criminal penalty according to the law, even when public outcry demands a harsh sentence.
Stage 5: We understand many different kinds of systems and can reason about how different systems would work in a given situation. We have many different identities and don’t feel like any particular one is the “true self”. Emotions are welcomed, even when they’re difficult, but the behavior that results is not deterministic. A judge sparing a young mother from jail may be using Stage 5 thinking.

Unless people are having tantrums and playing carefree in the aisles, you can expect most of your audience is primarily using a Stage 3 mode of relating to the world and to your speech. Here’s the thing: while a comment made from a Stage 4 perspective can appear heartless when viewed from a Stage 3 perspective, a comment made from Stage 5 can look like a partisan Stage 3 comment. Most viral outrage comes from this kind of stage mismatch. An ironic joke about AIDS not being just an African thing requires an ability to consider the different identities that make up a person, an ability characteristic of Stage 5. Heard from a Stage 3 perspective, it’s enraging. If you are an intellectual, you probably spend more time than most around people relating to the world using strategies from Stages 4 and 5, and you get used to saying complex things that are nevertheless understood. This is where people mess up. They think they’re in a cozy little salon, but they’re in the town square.

Twitter and Facebook have been called our “global town square”. Those who coined the phrase no doubt meant to vaguely gesture at a bucolic vision and egalitarian ideal of free speech, but I think the metaphor is more apt than that.

[Image: Norman Rockwell’s “Freedom of Speech” painting, showing a middle-class white man standing up to speak at a town meeting while others look on approvingly.]

You are not here.

[Image: Times Square in New York City.]

You are here.

As in Times Square, you are surrounded by many people, not all of whom you can see, not all of whom have your best interests in mind, and not all of whom can avoid lashing out if something you say triggers jealousy or rage. In the global Times Square, you don’t just produce content, you are content, and since the invention of social media, you are always in the global Times Square.

The big thing LLM interfaces are missing: dialogue

Daniel Tunkelang brought my attention to a lovely rant from Amelia Wattenberger about the lack of affordances in chat interfaces, and I began to wonder what it means for a chat interface to have an affordance. In other words, what’s the obvious thing to do with a chat interface, the way it’s obvious that a glove can be put on a hand to protect it? The obvious thing to do is to have a conversation, but people building products miss the most important thing about conversation: it’s a process, not a transaction.

Product people think the obvious thing to do with an LLM chat interface is to ask questions and get answers, and their critics quickly respond that the answers are merely plausible sentences whose truth is incidental. This whole exchange has become tiresome, and no one is putting their finger on the heart of the issue to move the conversation forward. Consider how a conversation with a subject matter expert actually goes. You start by asking bad questions, you get questions from them in return, and then you ask better ones. Through a process of back-and-forth dialogue, you end up with an understanding that is often very different from what you thought you were after when you started, as this person seeking help with a regular expression to parse HTML famously discovered. It’s the process of dialogue that matters, so thinking of the exchange as a transaction is a confused conceptual model. To make an LLM product that really delivers value through a chat interface, you have to provide dialogue as an affordance.
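To make the distinction concrete, here’s a minimal sketch of the two conceptual models in Python. The ask_llm function is a hypothetical stand-in for whatever model you’d actually call; it, the loop structure, and the clarifying-question prompt are illustrative assumptions, not any particular product’s API.

    # Sketch contrasting a transactional interface with dialogue as an
    # affordance. `ask_llm` is a hypothetical stand-in for a real model call.

    def ask_llm(messages: list[dict]) -> str:
        """Hypothetical model call: takes the conversation so far, returns a reply."""
        raise NotImplementedError("wire up your model of choice here")

    # Transactional model: one question in, one answer out. Any
    # misunderstanding in the question is baked into the answer.
    def transactional(question: str) -> str:
        return ask_llm([{"role": "user", "content": question}])

    # Dialogue model: the interface keeps state, and the model is prompted
    # to ask clarifying questions, so the user's real need can surface
    # over several turns rather than being guessed at in one shot.
    def dialogue(first_question: str) -> None:
        messages = [
            {"role": "system",
             "content": "Before answering, ask clarifying questions until "
                        "you understand what the user actually needs."},
            {"role": "user", "content": first_question},
        ]
        while True:
            reply = ask_llm(messages)
            print(reply)
            messages.append({"role": "assistant", "content": reply})
            user_turn = input("> ")
            if user_turn.lower() in {"done", "quit"}:
                break
            messages.append({"role": "user", "content": user_turn})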

There are lots of different kinds of conversations that people have. Do you approach the LLM as an all-knowing oracle, a creative companion, a virtual assistant, or something else? What needs to be apparent when someone enters a conversation with an LLM? Looking again at how a conversation begins with an oracle or a creative companion or an assistant provides some hints. An expert may present themselves with a degree, a wizened face, a tweed coat with elbow patches, or an office in an old, ivy-covered hall of knowledge, but it’s over the course of a conversation with them that you move toward a better understanding. A creative professional may have a bright office filled with primary colors and whiteboards, but it’s through collaborative discussion that you get inspired and flesh out your vision. The best assistance comes from your relationship with an assistant, too. Not transactions, but dialogue. (Maybe even dialogos!)

Ok, so how do we make this concrete in terms of a chat interface, and how do you know if your design is working for people? I’m a communications professional, not a product person, so I can only gesture in a direction, but if there’s a way to quantify how well a query partitions a space of information, that could be a good place to start in figuring out whether your expert is engaging in effective dialogue and leading someone to a better understanding. To take a simple example, imagine someone asks for gift ideas. If you tell the salesperson at a fancy retail store that you’re looking for something for your mom, they’ll ask what the occasion is, because occasion is one of the main ways gift-giving is partitioned. A chat agent should afford carving up the information space in a similar fashion. An LLM contains multitudes, so it doesn’t make sense to put ChatGPT in a tweed coat, but that’s not important. The important thing is that there’s a process of dialogue through which understanding or inspiration or whatever is approached.
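One way to quantify “how well a query partitions a space” is information gain: how many bits of uncertainty about which item the user wants does the answer to a clarifying question remove? A rough sketch under that assumption; the toy gift catalog and the choice of entropy as the metric are mine, not anything from a real product.

    import math
    from collections import defaultdict

    # Toy catalog: each gift is tagged with attributes a clarifying
    # question could ask about. The data is an illustrative assumption.
    GIFTS = [
        {"name": "flowers",     "occasion": "birthday",    "price": "low"},
        {"name": "novel",       "occasion": "birthday",    "price": "low"},
        {"name": "scarf",       "occasion": "birthday",    "price": "low"},
        {"name": "champagne",   "occasion": "anniversary", "price": "high"},
        {"name": "photo album", "occasion": "anniversary", "price": "low"},
        {"name": "watch",       "occasion": "anniversary", "price": "high"},
        {"name": "pen set",     "occasion": "graduation",  "price": "mid"},
        {"name": "laptop bag",  "occasion": "graduation",  "price": "high"},
    ]

    def information_gain(items, attribute):
        """Bits of uncertainty about 'which gift?' removed by asking a
        question answered by the given attribute. Assumes all gifts are
        equally likely a priori."""
        n = len(items)
        groups = defaultdict(int)
        for item in items:
            groups[item[attribute]] += 1
        # Entropy before asking is log2(n); after, it's the expected
        # entropy of whichever group the answer lands us in.
        after = sum((size / n) * math.log2(size) for size in groups.values())
        return math.log2(n) - after

    # A question with higher information gain partitions the space better,
    # so it's a better clarifying question for the agent to ask first.
    for attr in ("occasion", "price"):
        print(f"asking about {attr}: {information_gain(GIFTS, attr):.2f} bits")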

Outsourcing judgement, the AI edition

With every new technology, people try to do two things with it: communicate with others and rate people1. AI is no exception, and HR and communications professionals should expect it to show up in two places: social media analysis and candidate screening.

Over the past 13 years, I’ve become an expert in many different ways of rating people, from the academic citation analysis tools on which universities spend millions to dating apps, and I’ve used a number of tools to monitor social media. These tools are dangerous to your business if you don’t know what you’re doing. You absolutely cannot assume social media is an accurate reflection of actual customer or consumer sentiment2. Social media monitoring tools will show you thousands of mentions from accounts with names like zendaya4eva and cooldude42, and they’ll roll everything up into pretty dashboards that summarize the overall sentiment for you. There’s just one problem: social media sentiment analysis sucks. Posts aren’t long enough for the algorithms to get enough signal, and the algorithms can’t detect sarcasm or irony. You’re better off looking at a sample of posts than using a sentiment dashboard. Analytics vendors know this and are building AI into their tools to improve matters, but if you’re looking at social media sentiment because it’s easier to get than data on actual customers, you’re like the proverbial drunkard looking for his keys where the light is better rather than where he actually lost them, and no amount of AI can fix that.
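If “look at a sample of posts” sounds unscientific, it doesn’t have to be: a simple random sample read by a human usually tells you more than an aggregate polarity score. A minimal sketch, assuming your monitoring tool can export mentions to CSV; the file name and column names here are made up for illustration.

    import csv
    import random

    # Hypothetical export from a social media monitoring tool; the file
    # and column names are illustrative assumptions, not a real tool's format.
    MENTIONS_FILE = "mentions_export.csv"
    SAMPLE_SIZE = 50

    with open(MENTIONS_FILE, newline="", encoding="utf-8") as f:
        mentions = list(csv.DictReader(f))

    # A fixed seed makes the sample reproducible, so colleagues can
    # read the same posts and compare judgments.
    random.seed(42)
    sample = random.sample(mentions, min(SAMPLE_SIZE, len(mentions)))

    # Read these yourself: a human catches the sarcasm and irony that
    # short-text sentiment models routinely miss.
    for post in sample:
        print(f"@{post['author']}: {post['text']}")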

Candidate screening tools make some of the same promises: we can analyze the social media history of a candidate and flag areas of concern! I’ve written social media policies3 for several organizations, and never have I ever seen a hiring or firing decision depend on a social media post that required a tool to flag it. It’s very tempting to outsource our judgment. Thinking is hard, and people aren’t always very good at it. You might think it’s better to have an objective process that eliminates conscious or unconscious bias4, but when you do this, you’re taking agency out of the hands of HR and the hiring manager. Hiring is a hard, multi-factorial decision, and the last thing you want to do is outsource judgment here5.

Progressive summarization of audio & video to retain more of what you hear in podcasts & watch in online lectures.

I read a lot, and in a lot of different places. Sometimes I’m just reading for fun, but when I’m reading something that I want to remember and be able to share with others or apply in my own life, I have found annotation and progressive summarization to be effective approaches. These approaches generally require text, but with the addition of a few services that mostly play nicely together, you can extend this approach to audio and video.

Prerequisites

Accounts at Otter, Readwise, Hypothesis, and Roam.
The Hypothesis toolbar in your browser of choice (I like Firefox).

The Process

Let’s say you’re watching a lecture. Instead of scribbling notes in a notebook that you’ll later have to transcribe into Roam, you open Otter and let it start creating a text transcript. Take pictures or screenshots as you go, because Otter can place those in the transcript according to their timestamps. When it comes time to review, you use Hypothesis to annotate the Otter transcript, then you write some notes in Roam summarizing the insights from the lecture. If you’ve connected your Hypothesis account to Readwise, your highlights will occasionally be re-surfaced for you to review, which is a key step in making them actionable. There’s also a way to get spaced repetition in Roam.
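If you ever want your annotations outside these apps, Hypothesis exposes a public API you can query. A minimal sketch that pulls your annotations for a given document URL; the token, username, and transcript URL are placeholders you’d fill in, and the response fields are worth double-checking against the current API docs.

    import requests

    # Placeholders: substitute your own values. Get a developer token
    # from your Hypothesis account settings.
    API_TOKEN = "YOUR_HYPOTHESIS_TOKEN"
    USERNAME = "acct:yourname@hypothes.is"
    DOC_URL = "https://example.com/your-otter-transcript"  # placeholder

    resp = requests.get(
        "https://api.hypothes.is/api/search",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"user": USERNAME, "uri": DOC_URL, "limit": 200},
    )
    resp.raise_for_status()

    # Each row is one annotation; 'text' holds the note you wrote.
    for row in resp.json().get("rows", []):
        print("-", row.get("text", "(highlight only)"))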

The Setup

At Readwise, you have a bunch of options for connecting highlights. Enable Hypothesis, and it will pull in all your highlights from Hypothesis, including the ones you’ve made on Otter transcripts. You’ll find them under the Your Articles section at Readwise, where you can review them and write up summaries in Roam, linking to other concepts and notes.
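Readwise has an API too, which is handy for the backup habit recommended in the caveats below. A minimal sketch, assuming a token from your Readwise account settings; check the current API docs for pagination beyond the first page of results.

    import json
    import requests

    # Placeholder: get a token from your Readwise account settings.
    READWISE_TOKEN = "YOUR_READWISE_TOKEN"

    resp = requests.get(
        "https://readwise.io/api/v2/highlights/",
        headers={"Authorization": f"Token {READWISE_TOKEN}"},
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])

    # Dump the first page of highlights to a local file as a backup.
    with open("readwise_backup.json", "w", encoding="utf-8") as f:
        json.dump(results, f, indent=2)

    print(f"backed up {len(results)} highlights")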

Why It Works

It works for me because I use Readwise as a sort of catch-all bucket for all the stuff I already read in so many places – Kindle, Twitter, and all the stuff I find via Twitter and shove into Pocket – and now I can also use Otter to convert things I listen to or watch into a form that Readwise can catch & periodically re-surface for me.

Caveats

  1. When you select ‘view in article’ at Readwise, it will take you to Hypothesis, not Otter. Otter can generate sharing links you can annotate, or you can export the transcript and annotate it somewhere else that’s publicly accessible, which is probably the best course, since it gives you your own backup.
  2. Making extensive use of all these services costs a little money. Readwise is a couple bucks a month, Otter costs a little if you go over their free minutes, and Roam likewise has a subscription plan. I personally believe that if you’re going to invest a lot of time and effort into building a personal knowledge management system, you’ll want that system to stick around and get better, so you should hope they charge enough to do so. But I know even a couple bucks a month can be hard to come up with on a grad student budget, so here are some options (see the sketch after this list). Most YouTube videos have a transcript generated by Google, which may be of higher quality and won’t use up your Otter minutes. Also, Docdrop is a service from the founder of Hypothesis that facilitates annotation of all sorts of document types and can accept YouTube links.
  3. These services are all relatively new. There is a possibility that they go under or get bought by a company with a different privacy policy. Carefully inspect the privacy policies of all the services you use, consider not using services that don’t let you delete your content or get it out easily (Evernote, for example), and keep your own backups. I will note that getting acquired is not necessarily a bad thing for a service. My company, Mendeley, was acquired by Elsevier 7-ish years ago and is still going strong. Also, services that charge money tend not to be as intrusive to your privacy.
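For the free route mentioned in caveat 2, you can pull YouTube’s generated transcript yourself. A rough sketch using the third-party youtube-transcript-api package (pip install youtube-transcript-api); the video ID is a placeholder, and the package’s interface has changed over time, so check its docs.

    # Third-party package; verify against its docs if the interface has changed.
    from youtube_transcript_api import YouTubeTranscriptApi

    VIDEO_ID = "dQw4w9WgXcQ"  # placeholder: the ID from the YouTube URL

    # Returns a list of segments like {"text": ..., "start": ..., "duration": ...}
    segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)

    # Stitch the segments into a plain-text transcript you can host
    # somewhere public and annotate with Hypothesis.
    transcript = "\n".join(seg["text"] for seg in segments)
    with open("lecture_transcript.txt", "w", encoding="utf-8") as f:
        f.write(transcript)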

How to create a Twitter list using the command line Twitter client, t, on Windows.

Hashtags are nice, but I wanted to be able to dynamically create and delete Twitter lists as well, & I found the t Ruby client, which let me do this. Here’s how to get it going from a blank slate.

First, install Ruby with RubyInstaller. You’ll also need the DevKit.
Install the gem with gem install t. Then register an application with Twitter. Next, authenticate the client with t authorize, which should take you to Twitter and ask whether you want to let the app you created access your account. You might hit some silly issues getting the authentication to work, like outdated keys or whatever.

Then once you get to where you can tweet from your account, you’re ready to go. Create a new list with t list create [name of list]. The neat thing about having a command line client is that all the pipes and redirects and stuff work, so you can do cat listofhandles.txt | xargs t list add [nameoflist]. I think you have to have cygwin installed for xargs and cat to work on Windows, but you could just do a batch file with a loop if you didn’t care about looking cool (or use the sketch below).
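If you’d rather not install cygwin, here’s a rough equivalent of that pipeline in Python, shelling out to the same t commands; the file and list names are placeholders.

    import subprocess

    # Placeholders: one Twitter handle per line in this file, and the
    # name of a list you already created with `t list create`.
    HANDLES_FILE = "listofhandles.txt"
    LIST_NAME = "nameoflist"

    with open(HANDLES_FILE, encoding="utf-8") as f:
        handles = [line.strip() for line in f if line.strip()]

    # Equivalent of `cat listofhandles.txt | xargs t list add nameoflist`,
    # assuming t is on your PATH. On Windows, gem executables are .bat
    # files, so you may need shell=True or "t.bat" instead of "t".
    for handle in handles:
        subprocess.run(["t", "list", "add", LIST_NAME, handle], check=True)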

Cities make people unhappy

Glaeser, Gottlieb, and Ziv have an economics working paper at NBER (not Open Access, unfortunately) in which they report their assessment of the happiness of various cities across the US. Vox has a nice map which makes the point pretty well, but I had to grab the data and take a look for myself.

I got population data from the US Census website and, happily, the happiness study used the same metropolitan statistical area names as the census does, so it was a simple merge to visualize population vs. happiness.
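For anyone who wants to reproduce this kind of thing, the merge is a few lines of pandas. A minimal sketch; the file names and column names here are made up, so adjust them to match the actual census and study files.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical file and column names, for illustration only.
    census = pd.read_csv("census_msa_population.csv")  # columns: msa_name, population
    happiness = pd.read_csv("happiness_by_msa.csv")    # columns: msa_name, adj_happiness

    # Because both datasets use the same metropolitan statistical area
    # names, an inner join on the name column is all it takes.
    merged = census.merge(happiness, on="msa_name", how="inner")

    # Population spans several orders of magnitude, so a log scale
    # makes the relationship easier to see.
    merged.plot.scatter(x="population", y="adj_happiness", logx=True)
    plt.xlabel("MSA population")
    plt.ylabel("Adjusted happiness")
    plt.show()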
[Image: adjusted happiness vs. population]
Continue reading