Where’s the art in artificial intelligence?

“In the beginning was the Word, and the Word was with God, and the Word was God.” – John 1:1 KJV

When I think about creative works generated by generative AI, I think about golems. According to legend, Rabbi Yehudah Loew protected the Jews of Prague by animating a soulless lump of clay with the Word of God. Like the Rabbi of folklore, when we write or produce a work of art, we like to think we’re doing something that imbues our work with soul. People say an image created by GenAI lacks the “creative spark”, but what does this really mean? What distinguishes an illustration by Sendak or Doré or Giger from a similar work created by prompting Midjourney?

Imagine Hugging Face puts on an event where they place a generated illustration next to a work from a famous illustrator and ask an art critic to point out differences. The critic will get deep in the weeds and point out various differences. Then they’ll ask another critic, who’ll emphasize different distinctions, and then another, who’ll single out still others as crucial. Then Hugging Face will say, “Aha! The critics couldn’t reproducibly find differences, and therefore there really are none!” Lots of breathless media headlines and dunking on critics will ensue, unless…

The big thing LLM interfaces are missing: dialogue

Daniel Tunkelang brought my attention to a lovely rant from Amelia Wattenberger about the lack of affordances in chat interfaces, and I began to wonder what it means for a chat interface to have an affordance. In other words, what’s the obvious thing to do with a chat interface, the way it’s obvious that a glove can be put on a hand to protect it? The obvious thing to do is to have a conversation, but people building products miss the most important thing about conversation: it’s a process, not a transaction.

Product people think the obvious thing to do with an LLM chat interface is to ask questions and get answers, and their critics quickly respond that the answers are merely plausible sentences whose truth is incidental. This whole exchange has become tiresome, and no one is putting their finger on the heart of the issue to move the conversation forward. Taking a step back to consider how a conversation with a subject matter expert goes quickly reveals the confusion here. You start by asking bad questions, you get questions from the expert in return, and then you ask better ones; through a process of back-and-forth dialogue, you end up with a better understanding that is often very different from what you thought you were after when you started the conversation, as the person who famously sought help with a regular expression to parse HTML discovered. It’s the process of dialogue that’s the important thing here, so thinking of the exchange in terms of a transaction is just a confused conceptual model. To make an LLM product that really delivers value through a chat interface, you have to provide dialogue as an affordance.

There are lots of different kinds of conversations that people have. Do you approach the LLM as an all-knowing oracle, a creative companion, a virtual assistant, or something else? What needs to be apparent when someone enters a conversation with an LLM? Looking again at how a conversation is entered with an oracle or a creative companion or an assistant provides some hints. An expert may present themselves as having a degree, a wizened face, a tweed coat with elbow patches, or an office in an old, ivy-covered hall of knowledge, but it’s over the course of a conversation with them that you move towards a better understanding. A creative professional may have a bright office filled with primary colors and whiteboards, but it’s through collaborative discussion that you get inspired and flesh out your vision. The best assistance comes from your relationship with an assistant, too. Not transactions, but dialogue. (Maybe even dialogos!)

Ok, so how do you make this concrete in terms of a chat interface, and how do you know if your design is working for people? I’m a communications professional, not a product person, so I can only gesture in a direction, but if there’s a way to quantify how well a query partitions a space of information, that could be a good place to start in figuring out whether your expert is engaging in effective dialogue and leading someone to a better understanding. To take a simple example, imagine someone asks for gift ideas. If you tell the salesperson at a fancy retail store that you’re looking for something for your mom, they’ll ask what the occasion is, because occasion is one of the main ways gift-giving is partitioned. A chat agent should afford carving up the information space in a similar fashion. An LLM contains multitudes, so it doesn’t make sense to put ChatGPT in a tweed coat, but that’s not important. The important thing is that there’s a process of dialogue through which understanding or inspiration or whatever is approached.
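One way to make “how well a query partitions a space of information” concrete is information gain: a clarifying question is good if, averaged over its possible answers, it shrinks the candidate space a lot. Here’s a minimal sketch of that idea in Python, assuming a uniform prior over candidates; the gift data and the names `entropy` and `information_gain` are mine, not anything from an actual product.

```python
import math
from collections import Counter

def entropy(n):
    """Bits of uncertainty over n equally likely candidates."""
    return math.log2(n) if n > 0 else 0.0

def information_gain(items, question):
    """Expected reduction in uncertainty about which item the user
    wants after asking `question` (a function item -> answer),
    assuming every item is equally likely a priori."""
    n = len(items)
    groups = Counter(question(item) for item in items)
    # Expected uncertainty remaining, weighted by how likely each answer is.
    after = sum((size / n) * entropy(size) for size in groups.values())
    return entropy(n) - after

# Toy candidate space: (gift, occasion) pairs.
gifts = [
    ("flowers", "birthday"),    ("novel", "birthday"),
    ("candles", "anniversary"), ("wine", "anniversary"),
    ("scarf", "holiday"),       ("mug", "holiday"),
    ("card", "thank-you"),      ("chocolates", "thank-you"),
]

# "What's the occasion?" splits 8 candidates into 4 groups of 2:
# 3 bits of uncertainty drops to 1, a gain of 2 bits.
print(information_gain(gifts, lambda g: g[1]))   # 2.0

# A question every item answers the same way teaches you nothing.
print(information_gain(gifts, lambda g: "yes"))  # 0.0
```

On this toy data, asking about the occasion is worth two bits, while a question with only one possible answer is worth zero, which matches the intuition that the salesperson’s question is doing real work.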

Outsourcing judgment, the AI edition

With every new technology, people try to do two things with it: communicate with others and rate people1. AI is no exception and HR and communications professionals should expect it to show up in two places: social media analysis and candidate screening.

Over the past 13 years, I’ve become an expert in many different ways to rate people, from the academic citation analysis tools on which universities spend millions to dating apps, and I’ve used a number of tools to monitor social media. These tools are dangerous to your business if you don’t know what you’re doing. You absolutely cannot assume social media is an accurate reflection of actual customer or consumer sentiment2. Social media monitoring tools will show you thousands of mentions from accounts with names like zendaya4eva and cooldude42, and they roll everything up into pretty dashboards that summarize the overall sentiment for you. There’s just one problem: social media sentiment analysis sucks. Posts aren’t long enough for the algorithms to get enough signal, and the algorithms can’t detect sarcasm or irony. You’re better off just looking at a sample of posts than using a sentiment dashboard. Analytics vendors know this and are working on building AI into their tools to make this better, but if you’re looking at social media sentiment because it’s easier to get than data on actual customers, you’re like the proverbial drunkard, looking for your keys where the light is better rather than where you actually lost them, and no amount of AI can fix that.

Candidate screening tools make some of the same promises. We can analyze the social media history of a candidate and flag areas of concern! I’ve written social media policies3 for several organizations and never have I ever seen a hiring or firing decision depend on a social media post that required a tool to flag. It’s very tempting to outsource our judgment. Thinking is hard and people aren’t always very good at it. You might think it’s better to have an objective process that eliminates conscious or unconscious bias4, but when you do this, you’re taking agency out of the hands of HR and the hiring manager. Hiring is a hard, multi-factorial decision and the last thing you want to do is outsource judgment here5.