Mountains of feedback stored in indecipherable spreadsheets that nobody has time to read, let alone make sense of—we’re simply not going to stand for it. That’s why we launched channels in beta. It’s a new way for product teams to automatically turn continuous user feedback into real-time insights.
Along the way, we learned a lot about AI, getting the timing right, and building with our customers. Today, I’m sharing our learnings in the hope that they will help your development process.
For years, product teams have struggled to analyze large volumes of customer feedback efficiently; they’re disconnected from their customers and have difficulty prioritizing user problems.
The problem isn’t new. In fact, in 2022, we formed a small working group to try to solve it. The group was kept small to optimize for speed of decision-making rather than driving toward company-wide consensus (at the time, the company was about 50 people). They took a design-led approach to develop a product called Signal, along with user journeys, user roles, and a concept to satisfy the demand for high-volume feedback analysis.
Last year, generative AI exploded, unlocking mainstream adoption of large language models (LLMs). It made the concept of channels viable, and the timing was right.
So we got to work. We built an AI pilot team that was split off from the rest of Dovetail and focused on building the future of our product. We didn’t want the whole Dovetail team distracted by the shiny new thing, but we knew we couldn’t ignore it either.
We wanted to validate technical feasibility, market desirability, and product viability, so we built a lean cross-functional team of four engineers, a designer, a researcher, and a product marketer.
There were two streams. Our engineers were given free rein to play with this new technology, discovering what was possible and pushing it to its limits. Meanwhile, design, research, and product marketing immersed themselves in the market and with our customers: validating customer needs, exploring feature ideas, conducting discovery calls, tracking assumptions, concept testing, and co-designing. Our goal was to show some progress every day.
The small team allowed for rapid iteration, while the remaining Dovetail team continued to work on improving the core experience.
Technology is, and always will be, constantly changing. AI is no different—we continuously learn about new approaches and providers. As a result, the team needed to create a system that was flexible, adaptable to change, and secure. We focused on choosing the right service and tailoring models to deliver high-quality results.
When choosing the right service, we needed to balance confidence in the technology with the need for high-quality outputs. We opted for a stable and secure provider, AWS, with the flexibility to integrate others in the future. Adaptability was critical to our development process; it ensures we can continuously improve channels as AI technology evolves, and it keeps what we’re building unique and purpose-built for uncovering insights while protecting customer data.
From the outset, we decided to focus on the quality of the results we generated. We went deep into prompt engineering (129 lines deep), providing many questions and considerations to refine and produce the most valuable results. However, we were not happy with the output: we were seeing many duplicate themes, and themes that were too specific. So, as a test, we scaled the prompts right back and provided high-level, straightforward instructions. Surprisingly, the simpler prompts yielded broader themes with far less duplication, and they scaled better too. Prompt engineering is an evolving field, but for now, keeping it simple at Dovetail seems to be key.
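To make the "scale the prompts right back" idea concrete, here is a minimal sketch of what a pared-down theme-extraction prompt builder might look like. The wording, structure, and function names below are illustrative assumptions for this post, not Dovetail's actual prompts or code:

```python
# Illustrative sketch only: a short, high-level theme-extraction prompt,
# in contrast to a 129-line prompt packed with questions and constraints.
# The instructions and structure here are hypothetical examples.

def build_theme_prompt(feedback_items: list[str]) -> str:
    """Build a deliberately simple prompt asking for broad, deduplicated themes."""
    numbered = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(feedback_items))
    return (
        "Group the following customer feedback into a few broad themes.\n"
        "Keep themes general and avoid near-duplicates.\n\n"
        f"{numbered}"
    )

feedback = [
    "The CSV upload keeps failing on large files.",
    "I wish importing data from our helpdesk were easier.",
]
print(build_theme_prompt(feedback))
```

The point of the sketch is the design choice: a handful of plain instructions, with the deduplication and generality goals stated up front, rather than an exhaustive checklist the model has to juggle.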
For us, the key to success was testing and an adaptable infrastructure in the ever-changing world of AI.
We prioritized user needs throughout the entire development (and still do) by speaking to customers in early concept testing, usability testing, and our Alpha program. It allows us to get quick feedback and build confidence that we’re heading in the right direction.
Early research exposed two key needs. First, everyone on a product team, from designers to researchers to PMs, needed to access customer insights. This meant presenting information in a clear, concise way that catered to diverse user needs. Second, we saw varying levels of AI experience across teams. This meant including automated feedback classification and the ability for users to tailor themes based on their specific needs.
The Alpha program identified initial data import challenges. The team responded by prioritizing integrations and simplifying the process, replacing cumbersome CSV uploads with a more user-friendly experience.
This close collaboration with customers gives us confidence that channels solves a critical, real customer problem. The lessons learned will continue to guide the future of our product.
Don’t let feedback go to waste.