Take an exclusive peek behind the Dovetail curtain to learn how we built our AI features straight from the source—our Software Engineer, Anna Zhirnova.
I work on the Dovetail team that helped build Channels. I’d like to share with you how we built this feature and also try to demystify AI.
Magic is the moniker we use for AI and machine-learning features, and there’s a good reason for this.
AI and machine learning are all around us. We know they’re incredibly powerful, but they’re also complicated and hard to understand.
I’m going to break it down and give you some insights into fundamental concepts that power much of today’s technology.
Say you want to build something—a shiny new car, an airline app, or a product that takes your support tickets and converts them into themes to help you extract insights faster.
It’s easy to think AI is this all-powerful algorithm that you give your data to, tell it what to do, and it spits out the right result. But that’s not the case.
AI doesn’t build complete product solutions yet. Rather than being a whole car, it’s more like a collection of car parts. Each part has its specialization and is good at what it does.
That’s a machine-learning algorithm. It’s up to the engineers to put the parts together in a way that makes sense and results in a product that will bring value to your customers.
First of all, let’s look at the difference between machine learning and generative AI. You’ve probably heard both terms often, and they’re sometimes used interchangeably. But the key difference is what they focus on.
Machine learning finds patterns in existing data and applies them to new data to generate predictions. For example, machine-learning algorithms focus on giving you music recommendations or finding the best search results.
Generative AI is what every billboard in San Francisco means by “AI”. It still uses the core machine-learning algorithms, but it focuses on creating new data, like a school report, a travel itinerary, or an edited image.
As a Channels engineer, that’s what I’m most interested in, and that’s what I’ll use as an example.
Firstly, let’s go through what Channels are. We take your data, whether that’s support tickets, NPS feedback, or product reviews, and we extract themes.
We show you how these themes change over time and provide a summary of a theme, and you can dig into individual data points.
And we do that continuously. When new data comes in from Front or Intercom, it will go into your channel.
Your data is like a box of mixed Lego bricks in different shapes and sizes. There’s not much of a pattern to it, and on its own it isn’t very useful. If you can’t see patterns, you can’t make sense of it.
The job of Channels is to help you make sense of it.
So how do we do that?
Before we can do anything with a piece of data, we need to translate it into a language that a machine-learning algorithm can understand.
Take a simple Lego brick. It has a few key characteristics, for example, shape and size. A piece of data can be a few lines of text or a whole paragraph. We need to figure out how to encode those key characteristics in a way that an algorithm can understand. This is called generating an embedding.
So how do we actually do it?
A Lego piece has characteristics of size and color. We’ll create a two-dimensional space in which one axis is the size, the other axis is the color, and we’ll place the Lego piece in the correct spot in that two-dimensional space.
Text is much more complicated than Lego. Instead of a two-dimensional space, you end up with something multidimensional that is hard to visualize. But the core concept is the same. You place a piece of data in a space so you can compare it with other pieces. And you pass the coordinates of that piece to other algorithms so you can do useful stuff with that data.
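To make that concrete, here is a minimal sketch of generating embeddings and comparing two pieces of feedback. The sentence-transformers library, the model name, and the example feedback are assumptions for illustration, not necessarily what powers Channels:

```python
# A minimal sketch of generating embeddings and comparing them.
# The library and model here are illustrative assumptions, not
# necessarily what runs under the hood in Channels.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small general-purpose embedding model

feedback = [
    "The export to CSV button is broken on Safari.",
    "Exporting my data fails every time I use Safari.",
    "Love the new dark mode, great work!",
]

# Each piece of text becomes a vector: its coordinates in the embedding space.
embeddings = model.encode(feedback)
print(embeddings.shape)  # (3, 384): three texts, 384 dimensions each

# Texts about the same problem land close together in that space.
print(cosine_similarity([embeddings[0]], [embeddings[1]]))  # high similarity
print(cosine_similarity([embeddings[0]], [embeddings[2]]))  # much lower
```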
Knowing where a piece of data sits is not useful in itself. But it allows us to compare pieces of data and split them into buckets. This is called clustering, which is another machine-learning problem that’s pretty common.
You have your multidimensional space in which your pieces of data are floating around.
The job of a clustering algorithm is to find pieces that are close together. There are many algorithms to choose from, each with its own parameters, and picking the right one for the job takes careful evaluation.
That is what our engineers do. We test a bunch of different algorithms and pick the right one for the task.
What does that mean in practice?
Clustering allows us to extract themes from your data. However, at that point, they’re not particularly useful because we don’t know what’s in the theme. Think of it as an unlabeled box of Lego pieces. You don’t know what’s in it, and you have to go through all the pieces to figure out their value and what you can do with them.
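As an illustration, here is what clustering those embeddings into themes could look like with scikit-learn’s KMeans. KMeans, the number of clusters, and the placeholder embeddings are assumptions for the sketch; choosing the real algorithm is exactly the evaluation work described above:

```python
# A sketch of clustering embeddings into themes with scikit-learn's KMeans.
# KMeans is only one of many possible algorithms, and the embeddings here
# are random placeholders standing in for real ticket embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 384))  # pretend: 300 embedded support tickets

kmeans = KMeans(n_clusters=5, random_state=0, n_init=10)
labels = kmeans.fit_predict(embeddings)  # one theme label per ticket

# At this point each theme is just a numbered bucket of ticket indices,
# like an unlabeled box of Lego pieces.
for theme_id in sorted(set(labels)):
    members = np.where(labels == theme_id)[0]
    print(f"Theme {theme_id}: {len(members)} tickets")
```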
Next, we need to summarize the themes and label those Lego boxes. This is where generative AI comes in.
You can ask a large language model like ChatGPT to summarize a theme. We take the data points in a theme, the text behind all those vectors we created, send them to ChatGPT, and ask for a summary. Once we have the summary, we can tell you what’s in a given theme and help you make sense of it.
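Here is a rough sketch of that labeling step. The OpenAI SDK, the model name, the prompt, and the example tickets are assumptions for illustration, not Dovetail’s actual setup:

```python
# A sketch of labeling and summarizing one theme with an LLM.
# The model, prompt, and tickets are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

theme_tickets = [
    "Export to CSV fails on Safari.",
    "CSV download button does nothing in Safari 17.",
    "Can't export my project data from the browser.",
]

prompt = (
    "Here are customer support tickets that were grouped into one theme:\n\n"
    + "\n".join(f"- {t}" for t in theme_tickets)
    + "\n\nGive the theme a short label and a one-sentence summary."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```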
But there’s one final piece of the puzzle missing. What do we do with new data?
When new data comes in, we need to figure out which theme it belongs to. This is another core machine-learning problem called classification, and its algorithms are similar to clustering. They also look at where data sits in the space and figure out how to place it in the right bucket.
Engineers will spend a long time evaluating different classifiers and seeing how they work for your data. That is what lets us put updates into the channel. When a new piece of data comes in from Front or Intercom, we know where it goes.
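One simple way to picture classification is assigning a new piece of data to the theme whose centroid it sits closest to in the embedding space. This nearest-centroid approach is only one illustration; real classifiers vary:

```python
# A sketch of classifying a new ticket by finding the nearest theme centroid.
# Nearest-centroid is just one simple approach, used here for illustration.
import numpy as np

def classify(new_embedding: np.ndarray, centroids: np.ndarray) -> int:
    """Return the index of the closest theme centroid."""
    distances = np.linalg.norm(centroids - new_embedding, axis=1)
    return int(np.argmin(distances))

# The centroids could come from a clustering step (e.g. kmeans.cluster_centers_).
centroids = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
new_ticket = np.array([4.6, 5.2])  # embedding of a freshly arrived ticket

print(f"New ticket belongs to theme {classify(new_ticket, centroids)}")  # theme 1
```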
Hopefully, this gives you a better idea of how Channels work. We take your data, cluster it, summarize what’s in the clusters to give you useful themes, and then classify new data when it comes in.
This is a high-level description of how Channels work; there’s much more under the hood. But I hope that gives you a bit of insight into how AI works, and specifically how Channels work.
I want to share some lessons we learned when building Channels.
Staying pragmatic is the sixth unofficial value of Dovetail, and we take it seriously.
It was important for us in the Channels team to build the product in such a way that we could get it into customers’ hands as soon as possible. We also wanted to make it flexible so we could swap out all the AI algorithms under the hood without massively changing your experience.
Why was this important?
There is a very common issue in machine learning, namely overfitting a model to a specific set of data. Say, for example, we’ve trained it on a sample data set and it works great, but then customers start using it on their own data and the results stop making sense, because the model isn’t good at handling different types of data.
Building Channels this way means we can swap different algorithms out under the hood without affecting your experience.
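One hypothetical way to get that flexibility is to have the rest of the pipeline depend on a small interface rather than on any particular algorithm. The sketch below is an illustration of that idea, not Dovetail’s actual architecture:

```python
# A hypothetical sketch of keeping the theme-extraction algorithm swappable:
# the pipeline depends only on a small interface, so the algorithm behind it
# can change without changing the product. Illustration only.
from typing import Protocol, Sequence
import numpy as np
from sklearn.cluster import KMeans

class ThemeExtractor(Protocol):
    def extract_themes(self, embeddings: np.ndarray) -> Sequence[int]:
        """Return a theme label for each embedding."""
        ...

class KMeansExtractor:
    def __init__(self, n_themes: int):
        self._model = KMeans(n_clusters=n_themes, random_state=0, n_init=10)

    def extract_themes(self, embeddings: np.ndarray) -> Sequence[int]:
        return self._model.fit_predict(embeddings).tolist()

def build_channel(embeddings: np.ndarray, extractor: ThemeExtractor) -> Sequence[int]:
    # ...then summarize each theme, store the results, and so on.
    return extractor.extract_themes(embeddings)

# Swapping algorithms means swapping the extractor; nothing else changes.
embeddings = np.random.default_rng(0).normal(size=(100, 384))
labels = build_channel(embeddings, KMeansExtractor(n_themes=5))
```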
The next one is listening to your customers. I know this is cringy and obvious. However, I’d like to share a related lesson from when we first built Channels.
The experience mapped well to an engineer’s understanding of what Channels look like. So, to us, it made perfect sense. But as soon as we got it into alpha customers’ hands to test it, we started getting feedback that it was too convoluted; there were too many steps.
So we simplified it, and the Channels you see now hopefully provide a better experience for everyone involved.
I don’t need to tell you how fast AI is moving. As engineers and product designers, we have to keep track of the changes that pop up to stay ahead of the curve and provide good experiences.
As an example, in the last few weeks, our team in Sydney has changed almost the whole underlying model of Channels without really changing the user experience. That will give us better themes and make Channels easier to build.
One day, AI might provide complete product solutions, and we’ll just need to tell it what to do. That day hasn’t arrived yet, and our jobs are still our jobs, thankfully.
Until then, it is up to us—the product engineers, product designers, everyone who works in product—to figure out how to assemble those AI algorithms, those car parts, into products our customers want to use.
Editor’s note: This article is a condensed overview of Anna Zhirnova's chat in the Discover Dovetail room at Insight Out 2024.