Last updated: 3 April 2024
In part one of this series, “AI and market research technology: Researchers have been lucky over the last few decades, but has our luck run out?”, I reflected on my first-hand experiences with tech innovation in the market research industry over the last few decades.
Coming away from this contemplation, I felt researchers had enjoyed a relatively charmed life. The arrival of AI, and more specifically generative AI built on large language models (LLMs) like ChatGPT, looks set to fundamentally change our industry and the role of the researcher within it.
This follow-up closely examines what’s changing and likely to change very soon.
Market research timelines have always been an area of contention, with a constant drive to innovate and speed up.
From my early days in market research agencies, 6–8 week timelines for qualitative and quantitative programs gradually shrank to 3–4 weeks.
By the end of my tenure, agencies were developing ideas and concept-screening products in days rather than weeks.
The arrival of generative AI has made certain research tasks instantaneous. Generating believable-sounding answers to research questions is now accessible to anyone with a large language model (LLM), i.e., almost everyone. The key word here is “believable-sounding.”
While investing in slower, non-AI research will, in all likelihood, deliver more accurate answers, AI can deliver near-instantaneous, plausible responses to virtually any research question you can throw at it.
Also known as desk research, secondary research involves gathering and synthesizing existing reports and studies to answer research questions.
It has long been an unglamorous but reliable source of income and employment for specialist researchers, frequently offered as an add-on to broader research programs or alongside primary research services.
Expertly conducted secondary research often reduced the need for more extensive (and expensive) primary research. This was a win for the client, who could save on research costs, and a win for the researcher, who could rely on the income stream.
The introduction of generative AI tools like ChatGPT has seriously eroded the unique value of human-delivered secondary research. These tools give comprehensive, credible-sounding answers to basic research questions almost instantaneously.
But unlike when a researcher creates a report outlining their methods and sources, you can’t see how ChatGPT reached its conclusions. However, AI agents like AgentGPT address this gap.
An AI agent is a computer program designed to make decisions or perform tasks without human intervention, using algorithms and data to learn and improve its performance over time.
Unlike ChatGPT, which is comparable to a black box (you can’t see the inner workings), AI agents provide links and references, showing how they use relevant data to generate a response. This differentiator means you can create a complex, nuanced report backed by references that a human reviewer can dig into if desired.
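To make the idea concrete, here is a minimal sketch of that agent pattern in Python. It is not how AgentGPT itself is built: the `search_web` function is a hypothetical placeholder for whatever retrieval tool an agent plugs in, and the OpenAI client and model name are assumptions for illustration only.

```python
# Minimal "retrieve, then answer with citations" agent sketch.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def search_web(query: str) -> list[dict]:
    """Hypothetical retrieval step: should return [{'url': ..., 'snippet': ...}, ...]."""
    raise NotImplementedError("Plug in your own search engine or document index here.")


def answer_with_citations(question: str) -> str:
    sources = search_web(question)
    source_text = "\n".join(
        f"[{i + 1}] {s['url']}: {s['snippet']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below, "
        "citing them inline as [n].\n"
        f"Question: {question}\n\nSources:\n{source_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point of the pattern is that citations come from the retrieval step rather than the model’s memory, which is what makes the resulting report auditable.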
Depending on how business-critical the research question is, you may be able to rely on AI outputs alone to define a business strategy. While it’s clear that AI remains fallible, its responses will undoubtedly be plausible, and inexpensive or even free.
For example, in the past, I’ve been tasked with determining the market size of various US industries by retailer. If I ask Google’s chat-based AI tool, Bard, it will spit out the top five 2023 retailers of tea in the United States by sales as follows:
Walmart (25%)
Kroger (5%)
Costco (10%)
Albertsons (5%)
Target (5%)
These retailers account for over 50% of all tea sales in the United States.
It is important to note that this data is based on estimates and may not be 完全准确的.
(Those characters are simplified Chinese for “completely accurate,” according to Google Translate).
Additionally, the market share of individual retailers may vary depending on the region and other factors.
When I tried it for other categories, such as wheat bran and flour, Bard listed the exact same retailer breakdown, with the same Chinese characters inserted in the disclaimer.
I couldn’t find credible, freely available statistics to confirm what Bard produced was accurate. Still, it seems hardly believable that the same retailers, across different categories, in the same order, account for the exact same percentage of sales.
However, for many less critical market research questions, general guidance of this type will be sufficient and negate the need for a human to undertake further research.
Ultimately, research spending will be saved for business-critical, deeply strategic problems, while less critical decisions become increasingly driven by AI responses.
As a hardened qualitative researcher, I have always felt there are two places you shine as a “quallie.” Firstly, how well you can moderate a focus group or in-depth interview to gather relevant data and insights.
Secondly, the quality of your analysis: how well you can translate a jumble of transcripts and audio/video recordings into a coherent report addressing your client’s research and business objectives.
Both moderation and analysis skill sets take time to acquire. However, these abilities can deliver the most client value (and earning potential) once developed.
AI allows you to take relatively large blocks of qualitative interview text and instantly pull out important themes and insights.
Similarly, analyzing and coding open-ended responses from quantitative surveys is a task that once took hours of manual coding but can now be done almost instantly by adding (anonymized) responses into a generative AI platform.
Seeing this coding process in action is probably where I’ve been most stunned (and slightly horrified) at the power of AI.
AI can take 300-plus responses and, with zero latency, spit out credible, well-considered themes and insights, which require minimal editing and tweaking by the researcher.
It’s painful to see a skill you spent many years refining as a qualitative researcher, one that was a mark of pride and status, being done to a very credible standard, if not better, in a fraction of the time it would typically take a human.
Currently, there’s a limit to how much content you can analyze in a single hit, though I’m sure this capacity will grow over time, as will the overall quality of the analysis and insight.
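To illustrate that coding step, here is a minimal sketch assuming the OpenAI Python client, an assumed model name, and a list of already-anonymized open-ended responses; the `extract_themes` helper is hypothetical rather than a feature of any particular platform.

```python
# Minimal sketch: ask an LLM to theme anonymized open-ended survey responses.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def extract_themes(responses: list[str], question: str) -> str:
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    prompt = (
        f"The open-ended survey question was: '{question}'.\n"
        "Identify the main themes in the responses below. For each theme, give a short "
        "label, a one-sentence description, and the numbers of the supporting responses.\n\n"
        f"{numbered}"
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content


# Example usage:
# print(extract_themes(["Too expensive", "Love the coconut flavour"], "What did you think of the product?"))
```

A researcher would still review the output, merge overlapping themes, and spot-check the supporting verbatims.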
Seemingly, the data-gathering element of qualitative research (gathering the views of living, breathing humans by living, breathing qualitative researchers) should be relatively secure. Well, sort of.
AI will increasingly erode the role of qualitative researchers on several fronts:
AI will be used to automate the creation of tools for qualitative research. I admit to using it myself to create research materials when I’m lacking the inspiration to start from scratch.
Text-to-image tools like DALL-E can create visual stimuli or mood boards to elicit participant feedback and discussion.
ChatGPT can generate a discussion guide with questions and topics to cover with a participant during an interview (a sketch of this follows below).
AI chatbots, which have become more reliable thanks to advances in generative AI, can conduct simple interviews and analyze the resulting information before spitting out detailed reports.
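For the discussion guide example above, a usable first draft can come from a single prompt. The sketch below reuses the same OpenAI client; the brief is an illustrative placeholder, and a moderator would still refine and pilot the result.

```python
# Minimal sketch: draft a discussion guide from a short research brief.
from openai import OpenAI

client = OpenAI()

brief = {
    "topic": "non-dairy yogurt purchase drivers",         # illustrative placeholder
    "audience": "health-conscious shoppers aged 25-35",   # illustrative placeholder
    "format": "60-minute in-depth interview",
}

prompt = (
    f"Draft a qualitative discussion guide for a {brief['format']} on "
    f"{brief['topic']} with {brief['audience']}. Structure it as a warm-up, "
    "core sections with probes, and a wrap-up, with rough timings for each section."
)

draft_guide = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(draft_guide.choices[0].message.content)
```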
For certain types of interviews (e.g., B2B interviews with participants in specialized roles or connecting with hard-to-reach, high-net-worth individuals), participants are unlikely to respond well to being interviewed by a chatbot. The complexity of the line of questioning may also be beyond current AI capabilities.
Similarly, complex research processes, like multi-stage product co-creation and innovation focus groups, will require an experienced moderator to facilitate and extract participant insights.
In my view, long-form qualitative research, such as ethnography, still requires human researchers rather than machines. This kind of research involves researchers spending extended periods with the subjects, asking questions, and observing for a sustained time. As researchers and social scientists, ethnographers are experts at spotting biases and inequities and perhaps uniquely positioned to help define responsible AI practices.
Still, for more straightforward research questions, businesses will increasingly use AI to generate DIY answers or replace focus group moderation with ever more capable chatbots, producing reasonably credible, if not excellent, information.
At first blush, using synthetic participants might sound ridiculous. However, using AI to model your research questions virtually has potential advantages over working with real-world participants.
For instance, synthetic data can eliminate privacy risks relating to individual health or financial information. Similarly, a synthetic sample opens up options where real-world data is scarce or costly to collect (such as soliciting opinions from high-net-worth individuals).
Until recently, businesses used actual participants when conducting eye-tracking to test visual materials (websites, billboards, posters, packaging). It’s a type of market research that tracks where participants’ eyes gravitate or spend the most time. The results help to understand whether the target audience picks up on relevant brand elements like the logo, brand colors, product names, calls-to-action, and so on.
Originally, eye-tracking required participants to wear bulky headgear, and it was challenging to calibrate.
Reviewing the footage to compare expected behavior with peoples’ actual responses was time-consuming and labor-intensive.
Then came website heat mapping tools, which enabled seeing exactly where users click, how far down a page they scroll, what captures their attention most, and more.
Now, AI can predict where people hypothetically would look using virtual eye tracking. It works by training an AI system on a large database of eye-tracking data. Once the AI system is trained, it can predict where people would look at any new material and make optimization recommendations.
Again, like other AI-based tools, it can deliver outputs at a fraction of the former cost and time. However, unlike other areas of market research, predicting responses to visual stimuli is a relatively narrow task. For this reason, it will likely be a “safer bet” to trust AI’s predictions here versus entrusting it with broader, more complex research into consumer behavior.
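As a toy illustration of the train-then-predict idea behind virtual eye tracking (not any vendor’s actual model), the sketch below fits a scikit-learn regressor on stand-in image features and measured gaze densities, then scores regions of a new design. Real systems use deep saliency models trained on large fixation datasets.

```python
# Toy sketch of predictive eye tracking: features -> gaze density, then score new creative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 32))   # stand-in for feature vectors of image regions
y_train = rng.uniform(size=500)        # stand-in for measured gaze density per region

model = GradientBoostingRegressor().fit(X_train, y_train)

X_new = rng.normal(size=(10, 32))      # regions of a new, untested design
predicted_attention = model.predict(X_new)
print(predicted_attention.round(2))    # higher values = regions predicted to draw more attention
```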
In a qualitative setting, you can, using AI, create a virtual persona of the person you want to understand as part of the research, e.g., weight-conscious New Zealand women aged 25–35 who eat non-dairy yogurt products at least once a week, with a household income above NZ$150,000 per annum. You can then “interview” them about their attitudes and even test new product ideas with them. For example, how would they feel about a new zero-fat, zero-sugar coconut-based yogurt product that promised to lower their cholesterol?
Here’s what ChatGPT comes up with:
"I'm quite intrigued by the idea of a zero-fat, zero-sugar coconut-based yogurt that promises to lower cholesterol. As someone who's weight-conscious, I'm always on the lookout for healthier food options that align with my dietary preferences. Non-dairy yogurts are already a regular part of my diet, so this new product could be a great addition.”
“The fact that it's zero-fat and zero-sugar immediately piques my interest because it suggests it might be a guilt-free treat that won't negatively impact my weight management goals. Plus, coconut-based products often have a pleasant flavor and texture, which could make this yogurt a tasty choice.”
Now, relying solely on this to inform a multi-million-dollar new product development decision, without speaking to any target consumers, is a risky proposition and, right now, beyond the risk appetite of many consumer-packaged goods product innovators.
However, increasingly we will see businesses initially using synthetic respondents in parallel with real-world consumers (and scrutinizing where the synthetic responses differ). For instance, a researcher might use eight real respondents and two synthetic ones. If the synthetic results are reliable and valid, subsequent decisions may become increasingly based on modeled responses.
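Here is a minimal sketch of how such an “interview” with a synthetic persona might be wired up, assuming the OpenAI Python client and an assumed model name: the persona description sits in the system prompt and each question becomes a user turn.

```python
# Minimal sketch: "interviewing" a synthetic persona via a system prompt.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a weight-conscious New Zealand woman aged 25-35 with a household income "
    "above NZ$150,000, who eats non-dairy yogurt at least once a week. "
    "Answer every question in the first person, as this consumer."
)

history = [{"role": "system", "content": persona}]


def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


print(ask("How would you feel about a zero-fat, zero-sugar coconut yogurt that promises to lower cholesterol?"))
```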
A similar approach will likely occur in quantitative research.
For instance, suppose you want to gather 1,000 synthetic participants and ask them how appealing coconut yogurt is on a scale of 1–7. You could ask AI to create a virtual panel in your target market and have them respond to typical innovation-related questions regarding purchase intent, relevance, likability, etc.
Again, it’s extremely unlikely that the industry will switch overnight to 100% AI-based synthetic samples. Still, there are scenarios in which, just as in qualitative research, synthetic and real-world participant responses will run in parallel to better understand the validity and reliability of results. Synthetically generated responses will increasingly replace real-world opinions (especially as real ones become increasingly costly to gather and incentivize).
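A quantitative version of the same idea might look like the sketch below: loop over synthetic respondents, ask each for a 1–7 appeal rating, and aggregate. The personas here are deliberately crude placeholders (a real study would sample them to mirror the target market), and the model name is again an assumption.

```python
# Minimal sketch: a synthetic panel rating a concept on a 1-7 scale.
import re
import statistics
from openai import OpenAI

client = OpenAI()


def synthetic_rating(persona: str, concept: str) -> int | None:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": persona},
            {
                "role": "user",
                "content": f"On a scale of 1-7, how appealing is this product: {concept}? "
                           "Reply with a single number only.",
            },
        ],
    ).choices[0].message.content
    match = re.search(r"[1-7]", reply)
    return int(match.group()) if match else None


personas = [
    f"You are shopper #{i} in New Zealand who buys yogurt weekly."  # crude placeholder personas
    for i in range(20)
]
concept = "a zero-fat, zero-sugar coconut-based yogurt"
ratings = [r for p in personas if (r := synthetic_rating(p, concept)) is not None]
print(f"n={len(ratings)}, mean appeal={statistics.mean(ratings):.2f}")
```

Running real and synthetic cells side by side, as described above, is what lets you check whether aggregates like this actually track human results.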
Working with human audience panels has plenty of potential risks, such as:
Respondents who speed through surveys just for micropayment rewards
Surveying people who qualify under false pretenses (i.e., are not your target group)
Niche respondents (in a particular field or income bracket) who are challenging to engage
High churn
So, if an AI-based survey model can at least provide comparable results (at a fraction of the cost), the current research panel model will be under significant threat.
Given AI's cost and speed advantages over human-created research, the more credible AI becomes, the more pressure it will put on the market research industry.
In the next article, “So, where next for human market researchers in an AI world?”, I explore strategies for us researchers to optimize our value as AI becomes increasingly embedded in our working lives. I also offer a view on where I think AI won’t replace us, given its inherent limitations in replicating the experience of being human and the social and ethical complexity within that, which will continue to give human researchers the edge.