Why we should stop romanticizing AI: a case for keeping AI in the engine room, not at the helm
Over the past few years, organizations have felt immense pressure around AI. The imperative has been simple: adopt or die. But in the rush to integrate AI into every department, team, and workflow, many leaders have confused speed with direction.
In that confusion, many organizations are quietly giving away the very muscle that made them competitive in the first place: discernment.
Rashina Bhula leads the Design Research team at Runyon, a New York-based product design and venture studio.

The sentiment around AI, especially in the large, legacy industries I work with at Runyon, has shifted. Last year, the panic was, “The train is leaving the station without us.” Now, it's more anxious: a resigned, “We can't afford not to.” Every executive or investor we work with asks the same question: “What’s our AI strategy?”
The truth is: if you are a company undergoing digital transformation, the utility of AI is undeniable. AI can be an essential lever for solving customer problems faster and hardening your business against agile competitors.
But the pressure to deploy now is immense. In sectors like financial services, insurance, and wealth management, for instance, there's a deep-seated anxiety about being left behind. The temptation is to rush in, talk about cost reduction and faster decision-making, and just do something. And we see this pattern repeatedly: organizations putting AI at the helm before they’ve clarified where they’re actually trying to go.
Adopting AI without deep intentionality comes with a heavy price. The biggest risk isn’t a cumbersome tech stack or a bad algorithm. It's cognitive atrophy at an enterprise scale.
The 80/20 trap: are we outsourcing the wrong work?
Many of us bought into the early promise of AI. We pictured it as the perfect assistant, the ultimate intern. It would take on the 80 percent of our work that we all dread—writing mundane emails, scheduling meetings, summarizing notes, all the administrative workflow stuff. This would, in turn, free us up to focus on the 20 percent of high-impact, deeply human work: strategy, creativity, and critical thinking.
In reality, something backward started to happen.
Large enterprises and even consulting firms started outsourcing their thinking. They began feeding AI the hard problems, asking it to define strategy, craft core messaging, and do the heavy cognitive lifting. They handed over the 20 percent, the very work that builds expertise and provides fulfillment.
Why? Because strategy feels like a safe sandbox for experimentation — low immediate risk, high perceived upside.
The danger is that the muscle for high-level strategic thought is incredibly hard to build back once it's been delegated to AI. It’s far harder to regain your lateral thinking skills than it is to learn to write a good email again.
There is a valid fear about AI taking jobs in the design and product world. But my fear isn't that AI will take my job overnight. My fear is that we’re all walking down a slippery slope where AI slowly strips away our ability to think, learn, and develop the intuition that allows us to do our work well. Scale that dynamic up to an entire enterprise and you risk cognitive atrophy at scale: a slow erosion of strategic instinct that no dashboard can replace.
And that’s a pitfall that is incredibly difficult to climb out of.
A case for intentionality
If we’re going to avoid this trap, we have to employ ruthless discernment. This isn't about whether we should use AI; it's about where and why. For the work I do with partners, that discernment comes down to two key principles.
Avoid automating the moments that build trust
The industries I work in, for example, insurance and wealth management, are built on human relationships. Yes, there are tools and software used in these industries, but the products are often sold by an agent or an advisor. These relationships matter most during key life events, both high and low: the purchase of a first home, the birth of a child, the death of a loved one, or a sudden inheritance.
You have to be incredibly careful that the part of the process you're outsourcing isn't the part that builds trust from day one. Think of the customer in deep grief after filing a life insurance claim, the aging parent trying to understand how to set up their estate for their now-adult children, or the client who just received a financial windfall after selling their company.
Self-guided, AI-driven experiences exist for each of these life events, but these are also the moments where a human connection is most critical.
Use AI as an amplifier, not a North Star
We’re seeing strategy decks and operating plans with “AI-first” mandates. But many of these mandates are less about conviction and more about needing something defensible to say in the next board meeting.
AI is not the ultimate goal. The goal is your mission: providing a service to real people. AI is a means to that mission, not the mission itself. “Outcomes-first, human-led, AI-amplified” is a far more durable stance.
Your guiding star should be a better end-user experience, a more resilient business model, or a more trusted relationship with your customer.
In all of these areas, AI is just an amplifier. The goal has always been to solve core customer problems. AI doesn't change that, but it can help us do it better. It should be about augmentation, not replacement.
How to adopt AI meaningfully
So, how does a large, complex organization do this in a way that actually helps the end user?
It starts by ending the romanticization. Treat AI as a capability lever, just like any other tool in your product strategy.
Start with strategic clarity. Don't start by asking, “How should we use AI?” Start by asking, “What are the business outcomes we’re trying to achieve?” Get as specific as possible: reduce turnaround time, increase productivity in a specific lane, lower friction at a key moment.
Map the experience. Once you know the outcome, do the work. Map the user journey. Understand the business processes and systems. Identify the real pain points.
Apply AI with intent. Now, look at that map and identify where a human touch is critical. Where does judgment, trust, or empathy matter most? Then, identify where augmentation makes sense. This isn't a binary; it's a spectrum. Some tasks remain fully human, some can be fully automated, and most will live in the middle as a human-AI collaboration.
Importantly, when you anchor your strategy in the user rather than the technology, you avoid the trap of simply bolting AI onto broken legacy processes. Instead, you fundamentally redesign workflows so that human-system collaboration and augmentation serve to elevate the user experience.
The irony: AI can make us more human
When I talk to professionals, like financial advisors, their fear is palpable. “I've been doing this for 25 years,” they might say. “Nobody knows my clients like I do. I know their dogs' birthdays. I know to call one on Valentine's Day because it's the first year she's a widow. AI could never do that.”
And I say, “You're absolutely right. It shouldn't.”
Even if AI could replicate that, it wouldn't fulfill the human need for connection that you're satisfying in that moment.
But then I ask, “What if I could give you a tool that quietly takes all those handwritten meeting notes and administrative tasks off your plate? What if it freed you up to have 10 or 15 more of those human-to-human phone calls every week?”
You see them light up. “Oh my God. Can I have that tomorrow?”
At Runyon, this is the work we’re most often pulled into: helping large organizations decide where AI should stay invisible, where it should meaningfully augment human judgment, and where it shouldn’t be used at all.
That’s the secret. That’s the balancing act. When we stop romanticizing AI and put it in its proper place, the engine room, we free ourselves up to do the one thing it can't: be human. By automating the 80 percent of tasks that drain us, AI can give us back the time to build trust and make real connections.
Rashina Bhula leads the Design Research team at Runyon. She works with Fortune 100 executive teams in highly complex industries to design AI-enabled products and operating models without sacrificing trust, judgment, or human connection. She also leads venture incubations in fintech, insurtech, and specialty consumer markets. Outside of work, she has developed and taught graduate-level design thinking curriculum at MIT and Parsons School of Design. Her past partners include NASA, Teach For America, and the Chan Zuckerberg Initiative.