Treat it like a therapist: How to use AI to create more actionable insights
And...how do you make that actionable?
Published 14 September 2023
ServiceNow’s Jeanette Fuccella investigates how generative AI can help improve our insights. The hot tip? Treat it like a therapist.
One of the core functions of our role as researchers is to uncover and communicate actionable insights that help drive roadmap decisions. 
This might sound like an easy task, but anyone who has done it knows how challenging it is.
Put aside how much skill is involved in just parsing out research goals that align with business objectives across a diverse group of highly opinionated stakeholders. Put aside the craft of designing studies that effectively extract the necessary data (balancing rigor against a multitude of resource constraints). Even then, identifying those nuggets of novel observation and then communicating them so that they:
  1. Actually get read, 
  2. Are understood, 
  3. Acknowledge a broader context, 
  4. Are specific enough to be actionable, and 
  5. Are broad enough to be durable over time
is a hugely time-consuming task. And that doesn’t even take into account cataloging all that hard work so it can be easily retrieved later.
But, since AI is the answer to everything that ails us these days, it seems only appropriate to explore how we might use it to streamline and improve this most basic—but essential—research task.
It also seemed appropriate to go full-on meta with this one and leverage AI to understand how we might leverage AI to build more actionable insights. 
In keeping with the theme, the lessons I learned were also more “meta” than actionable (but don’t worry, I learned some actionable lessons, too).
Necessary caveat! To be clear, I am not a security expert or an expert in AI, but I know enough to err on the side of caution. Unless you are using a company-proprietary AI model, an “Enterprise” plan, or AI provided through a purchased (and vetted) tool, there’s probably very little you can safely do without risking exposure of confidential data. 
The whole point of identifying actionable insights is to uncover novel observations from data that lead to innovations. Sharing those novel observations—or worse, the source data from which you derive those observations—with an LLM learning from your data inputs is likely a security or privacy breach. The power of AI is profound, but be sure you are leveraging it safely.
Focus on the journey, not just the destination
Sure, GenAI can provide shockingly good answers to almost any question. But so can Google. What a conversational assistant can do that Google can’t do is engage you in a reflective dialogue that helps deepen your understanding of a topic. Take my example—I started my research by asking ChatGPT a big loaded question head-on: 
How might product teams leverage AI to build actionable insights that drive product roadmap decisions more effectively? 
ChatGPT’s response was a mishmash of marketing buzzwords, research methodologies, and vague pie-in-the-sky solutions. The response wasn’t wrong; it just wasn’t specific enough to be actionable. 
So I decided to go back to basics and asked:
How would you define a UX research insight?
This time, I received a response that would charm even the most cynical grade school English teacher. It included a short intro, bulleted criteria and descriptions, a handful of examples, and a lovely concluding paragraph to drive it all home. 
Honestly, it was a pretty solid response. Maybe even annoyingly so. Okay, time to give it some nuance. This time, I asked:
What makes a UX research insight actionable?  
Another perfectly formatted response with some solid content, but this time, as I was scanning through the criteria, I came upon one that stopped me in my tracks. 
This caused me to pause and consider whether an insight involves only a description of the problem or if it includes a recommended solution as well. 
In other words, what I experienced as the real value of conversational AI wasn’t in asking a question and getting an answer. It was being in dialogue with the assistant to clarify my own thoughts.
We all know this by now, but ChatGPT is no genie in a bottle. It’s, in essence, a mirror that reflects back to us what “we” (the collective web-hood) have told it. 
From Socrates to my therapist, this type of reflective introspection is an effective mechanism for garnering deeper understanding, illuminating inconsistencies, and identifying hypocrisies. All of these are necessary steps toward identifying actionable insights from data. 
In other words, dialoguing with conversational AI about a research insight will likely yield a more durable, novel, and actionable insight.
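If you want to experiment with this kind of reflective dialogue outside the ChatGPT interface, here is a minimal sketch using the OpenAI Python client. Everything in it is illustrative rather than from my experiment: the system prompt, the model name, and the loop itself are just assumptions about one way you might set this up.

```python
# A minimal sketch of a reflective dialogue loop, assuming the OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY environment
# variable. The system prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()

# Keeping the full message history is what turns one-shot Q&A into a
# dialogue: each follow-up is interpreted in light of earlier turns.
messages = [
    {
        "role": "system",
        "content": (
            "You are a reflective thinking partner for a UX researcher. "
            "Ask clarifying questions and surface inconsistencies rather "
            "than rushing to answers."
        ),
    },
]

while True:
    question = input("You: ")
    if not question:  # an empty line ends the session
        break
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model you have access to
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"Assistant: {reply}")
```

The point isn’t the code; it’s that the running history list makes the assistant’s responses build on your earlier clarifications, the same way the ChatGPT interface does.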
Diverge and converge
Writing effective and actionable insights is as much art as science. 
The goal is to distill countless hours of research into as short a description as possible—often only a sentence or two. 
As the saying goes, shorter stories take longer to write, and this is definitely true for writing insights.
A conversational assistant can be your best friend if you need help distilling a paragraph of text into a concise insight or rephrasing an existing one.
As I was trying this out myself, it occurred to me that I could ask ChatGPT to provide me with even more value than mere editorial assistance. 
I could ask for suggestions that adhere to the criteria defined in our previous conversation for what constitutes an actionable insight (User-Centered, Contextual, Actionable, Novel, Impactful, Root Cause Identification, Data-Informed). 
(Side note: is it possible that Clippy and ChatGPT are related?)
For my experiment, I knew I couldn’t use an actual insight from my work (I enjoy being employed). So, I turned to this article by Etienne Fang that I’ve referred to many times for help defining an insights framework. Etienne shares a “before” and “after” of an insight from her team. I thought it might be interesting to see how well ChatGPT can transform the “before” version using just our framework criteria:
Uber’s “before” insight:
“Some Uber Bus drivers stop receiving requests one hour before their shift ends so they can finish on time.”
Which ChatGPT, with the enthusiasm of its long-lost paper clip cousin (“Certainly!”), converted into a two-sentence rewrite.
While the first sentence is essentially a restatement of the original, ChatGPT identified that the original insight wasn’t “Actionable.” 
To meet that criterion, it added a second sentence. While perhaps not perfect, the added sentence might flag to the researcher that they haven’t quite nailed the essence of the insight.
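For repeated use, you could script this kind of rubric-driven rewrite. Here is a rough sketch under the same assumptions as the earlier example; the helper name, prompt wording, and model are mine, not anything ChatGPT or Etienne’s framework prescribes.

```python
# A sketch of asking a model to rewrite a draft insight against a rubric.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the helper name, prompt wording,
# and model are placeholders.
from openai import OpenAI

client = OpenAI()

# The criteria from my earlier conversation with ChatGPT.
CRITERIA = [
    "User-Centered", "Contextual", "Actionable", "Novel",
    "Impactful", "Root Cause Identification", "Data-Informed",
]

def rewrite_insight(draft: str) -> str:
    """Ask the model to rewrite a draft insight to satisfy each criterion."""
    prompt = (
        "Rewrite the following UX research insight so it satisfies each of "
        f"these criteria: {', '.join(CRITERIA)}.\n\n"
        f"Draft insight: {draft}\n\n"
        "Then list any criteria the original draft failed to meet."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; substitute the model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(rewrite_insight(
    "Some Uber Bus drivers stop receiving requests one hour before "
    "their shift ends so they can finish on time."
))
```

The output will differ from run to run; as with the ChatGPT version, the value is less the rewrite itself than seeing which criteria the model thinks your draft misses.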
Identifying hypotheses
Creating truly actionable insights often requires combining data sets via multi-phased research plans. Engaging a conversational assistant to generate hypotheses throughout this process can surface new opportunities for exploration, which will further refine the insight.
Take, for example, the above insight about Uber bus drivers. We know they aren’t taking ride requests about an hour before the end of their shift, but—based on the insight as written—we don’t know why. A conversational assistant might be able to help offer new hypotheses to explore.  
Each hypothesis the assistant offers can then be evaluated (whether with existing data or new research) to improve the usefulness of the associated insight. 
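Here is a similarly hedged sketch of asking for hypotheses programmatically; again, the prompt wording, helper name, and model are illustrative assumptions rather than anything from my experiment.

```python
# A sketch of generating testable hypotheses for an existing insight.
# Same assumptions as the earlier sketches: OpenAI Python client,
# OPENAI_API_KEY set; helper name, prompt, and model are illustrative.
from openai import OpenAI

client = OpenAI()

def generate_hypotheses(insight: str, n: int = 5) -> str:
    """Ask the model for hypotheses that could explain an observed behavior."""
    prompt = (
        f'Here is a UX research insight: "{insight}"\n\n'
        f"Offer {n} testable hypotheses for why this behavior occurs, each "
        "phrased so a researcher could evaluate it with existing data or a "
        "follow-up study."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_hypotheses(
    "Some Uber Bus drivers stop receiving requests one hour before "
    "their shift ends so they can finish on time."
))
```

Treat the output as prompts for your own thinking, not as findings; each suggestion still needs to be tested against real data.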
The next exciting moment will be when the conversational AI has access to our proprietary data, allowing us to organically and iteratively refine our insights based on what’s already known. Suggested actions will be rooted in prior knowledge, improving over time based on results from previous experiments. These repos-of-the-future will allow us to balance brevity and clarity with context and human-centeredness in our insights. 
