The big thing LLM interfaces are missing: dialogue

Daniel Tunkelang brought my attention to a lovely rant from Amelia Wattenberger about the lack of affordances in chat interfaces, and I began to wonder what it means for a chat interface to have an affordance. In other words, what's the obvious thing to do with a chat interface, the way it's obvious that a glove goes on a hand to protect it? The obvious thing to do is to have a conversation, but people building products miss the most important thing about conversation: it's a process, not a transaction. Product people think the obvious thing to do with an LLM chat interface is to ask questions and get answers, and their critics quickly respond that the answers are merely plausible sentences and any truth is incidental. The whole exchange has become tiresome, and no one is putting their finger on the heart of the issue in a way that moves the conversation forward.

Taking a step back to ask how a conversation with a subject matter expert actually goes reveals the confusion. You start by asking bad questions, you get questions from them in return, you ask better ones, and through that back-and-forth you end up with an understanding that is often very different from what you thought you were after when you started the conversation, as this person seeking help with a regular expression to parse HTML famously discovered. The process of dialogue is the important thing here, so thinking of the exchange as a transaction is just a confused conceptual model. To make an LLM product that really delivers value through a chat interface, you have to provide dialogue as an affordance.

There are lots of different kinds of conversations that people have. Do you approach the LLM as an all-knowing oracle or a creative companion or a virtual assistant or something else? What needs to be apparent when someone enters a conversation with an LLM? Looking again at how a conversation is entered with an oracle or a creative companion or an assistant provides some hints. An expert may present themselves as having a degree, a wizened face, a tweed coat with elbow patches, or an office in an old, ivy-covered hall of knowledge, but it's over the course of a conversation with them that you move towards a better understanding. A creative professional may have a bright office filled with primary colors and whiteboards, but it's through collaborative discussion that you get inspired and flesh out your vision. The best assistance comes from your relationship with an assistant, too. Not transactions, but dialogue. (Maybe even dialogos!)

Ok, so how do you make this concrete in terms of a chat interface, and how do you know if your design is working for people? I'm a communications professional, not a product person, so I can only gesture in a direction, but if there's a way to quantify how well a query partitions a space of information, that could be a good place to start to figure out whether your expert is engaging in effective dialogue and leading someone to a better understanding. To take a simple example, imagine someone asks for gift ideas. If you tell the salesperson at a fancy retail store that you're looking for something for your mom, they'll ask what the occasion is, because occasion is one of the main ways gift-giving is partitioned. A chat agent should afford carving up the information space in a similar fashion. An LLM contains multitudes, so it doesn't make sense to put ChatGPT in a tweed coat, but that's not important. The important thing is that there's a process of dialogue through which understanding or inspiration or whatever is approached.
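To gesture a little further in that direction: one simple way to quantify how well a clarifying question partitions a space is information gain, i.e. how much the uncertainty over the candidate answers drops once you know the user's reply. The sketch below is only a toy illustration of that idea under my own assumptions; the gift list, the attribute names, and the information_gain helper are all hypothetical, not anything a real product ships.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a list of labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(items, question_key):
    """How much knowing the answer to `question_key` narrows the candidate set."""
    before = entropy([item["label"] for item in items])
    # Group candidates by the answer the user would give to the clarifying question.
    groups = {}
    for item in items:
        groups.setdefault(item[question_key], []).append(item)
    after = sum(
        (len(group) / len(items)) * entropy([item["label"] for item in group])
        for group in groups.values()
    )
    return before - after

# Toy candidate space of gift ideas (entirely made up for illustration).
gifts = [
    {"label": "flowers",   "occasion": "birthday",  "color": "red"},
    {"label": "book",      "occasion": "birthday",  "color": "red"},
    {"label": "champagne", "occasion": "promotion", "color": "red"},
    {"label": "watch",     "occasion": "promotion", "color": "gold"},
]

# Higher gain means the question does more to carve up the space.
print(information_gain(gifts, "occasion"))  # 1.0 bit
print(information_gain(gifts, "color"))     # ~0.81 bits
```

In this framing, "what's the occasion?" scores higher than "what color does she like?" because it splits the candidates into more even, more homogeneous groups, which is roughly what the salesperson's instinct is doing.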