Do we really need to talk to users?
It’s a question most experience designers have heard at least a couple of times, probably more, throughout their careers. Personally, I’ve heard some form of it asked about a dozen times in the past month alone.
We’re reviewing a proposal with a client—maybe for a new employer benefits portal or an updated agent quoting experience—or I’m walking through a research plan with the team, and someone pipes up. Maybe it’s a prospective client, a project sponsor, a key stakeholder, or one of my teammates. Or maybe it’s even the whispering little devil on my own shoulder.
Do we need to talk to users? Really? Again?
Typically, this question isn’t actually about whether research is valuable—luckily, most now know not to argue that point—it’s about whether we can rely on what we already think we know. Whether we can count previous studies or other kinds of feedback as enough user research for this project.
For many organizations, user research can look like an unnecessary step, an extra line in the SOW before the “real work” of design begins.
This hesitation usually comes from an understandable place. Insurance companies gather information constantly and pride themselves on having strong relationships with their agents and customers. Service centers log thousands of calls from employers, brokers, and members. Producers relay what they’re hearing from clients. Many carriers maintain agent councils or broker advisory groups to surface feedback from the field.
It’s an industry built on assessing risk that rarely suffers a shortage of data.
So the question makes sense: if we already know so much about our customers and users, why spend any of our finite time in Zoom rooms sharing screens with them?
Most insurance research wasn’t scoped to inform design
In many ways, insurance organizations are rich in insight. They track satisfaction scores, analyze service calls, gather feedback from agents, and conduct broader market research about customer perspectives.
But most of this research was never intended to inform design decisions—it wasn’t built to observe behavior, understand context, or evaluate how well solutions actually work. It was conducted to measure sentiment, monitor service performance, or understand broader shifts in the market. Many of these inputs rely on what people say rather than what they actually do in practice. As a result, they rarely reveal what people are actually trying to do, what gets in their way, or how they adapt when the system falls short.
At the start of every project, we ask clients to share any research they already have and what they know about their users. We want that context—it’s useful. But it’s also where things can get a little murky. Too often, clients assume this information can stand in for original user research.
What we usually get looks something like this:
1. Insights from the distribution channel
Agents and brokers speak with clients every day, and their perspective can be incredibly valuable. Advisory councils and producer groups often surface patterns about what clients find confusing or difficult to explain.
The issue with these insights is that they are second-hand. Agents interpret what they hear through the context of their own workflows and priorities, and often pass it along in ways that reinforce them. Their perspective is essential for understanding the distribution relationship, but it doesn’t always reveal how policyholders, internal employees, or agents actually behave as they get a quote, underwrite a policy, process changes, answer service inquiries, or navigate claims within the digital experiences we’re being asked to design.
2. Internal market or insights research
Many organizations conduct broader studies about customer sentiment, product perception, or emerging trends in the market. This research might tell us what people say they want or how they feel about insurance, but not how they actually behave in practice—how they make decisions, work around constraints, or respond to solutions in real-world contexts.
3. Stakeholder interviews
At the start of many projects, we talk with product owners, service leaders, and operational stakeholders who understand the systems inside and out. These conversations are essential. They help us understand business priorities, technical constraints, and how work is supposed to flow.
But they’re not the same as user research. Even when stakeholders do their best to represent the user point of view—or when they are themselves agents, underwriters, or service representatives—they’re still operating within the context of the organization. They’re cued to speak on behalf of a function, a process, or a set of constraints, rather than reacting as individuals navigating the experience in real time.
This distinction gets especially blurry when we’re designing tools for internal users. Stakeholder workshops and internal input can feel like a proxy for user research. But insights gathered in these settings tend to reflect how work is intended to happen, not how it actually unfolds day to day—where people adapt, work around limitations, or respond to the tools in front of them. When too much weight is placed on these perspectives, projects can drift toward internal consensus rather than a clear understanding of user behavior.
4. Usability testing on current systems
Some organizations conduct usability studies on existing tools or platforms to identify where users struggle. This research can be incredibly helpful for pinpointing specific issues—where people get stuck, what’s confusing, or where errors occur. But it’s often scoped to evaluating a defined set of tasks within an existing system. It tells us how well the current experience performs, not whether the right problems are being solved in the first place. Without a broader understanding of users’ context, needs, and behaviors, usability findings can lead to incremental fixes rather than uncovering larger opportunities. This is where starting with the method—testing, interviews, surveys—can be misleading.
5. Customer feedback programs
Satisfaction surveys, NPS programs, call center reports, and complaint tracking help organizations identify where customers encounter friction. But this information is typically reactive and self-reported. It captures moments of feedback, not the full context of how work unfolds. As a result, it often highlights pain points and breakdowns without revealing the broader behaviors, decisions, and workarounds that shape the experience over time.
All of these inputs can be useful lenses through which to view the user experience—but they’re not the same as looking at it head-on. And when that’s the primary way we’re seeing the experience, the work tends to follow.
For example, when design work relies primarily on service data and complaints, it skews toward simply fixing what’s wrong. Designers and product teams simplify a form, reorganize information on a page, reduce clicks, or clarify instructions. The system might be optimized, but is the experience fundamentally better?
What’s missing is an understanding of behavior in context: how people actually get work done, what they need in the moment, and how they respond to the tools in front of them. Without that, we risk improving the system without meaningfully improving the experience.
User research reveals what other insights miss
Good user research in insurance involves spending time directly with users and observing how they actually work in context, asking questions, and exploring how they respond to the tools and constraints around them. When we do this, their experiences almost always look a bit different from the process diagrams drawn inside the organization.
We’ve observed agents preparing quotes. Analytics tracks their clicks through carrier portals, NPS programs flag general dissatisfaction, and call center logs capture moments of friction.
But watching in real time, we see them reference homegrown spreadsheets in another tab or track submissions on the “big board” behind their desks. We hear quick calls to underwriters to check eligibility and we clock the impatience triggered by slow load times.
These are the kinds of details that don’t show up in dashboards or reports. But they’re often what inspire new features, more intuitive interfaces, and better prioritization of what actually matters to users.
This is also where direct engagement becomes especially important. People are often very good at showing you how their work happens—and explaining why—but much less reliable when asked to prescribe what the solution should be.
Designing beyond the system
When teams begin to see firsthand how work happens, the design opportunities start to crystallize.
Insurance organizations understandably devote much of their attention to maintaining and improving the systems already in place. Policy administration platforms, quoting tools, enrollment systems, and service portals must continue to function reliably while the business evolves.
But focusing too narrowly on those systems can create a kind of tunnel vision. Design becomes about optimizing what already exists—improving a screen, simplifying a workflow, clarifying a form. User research in insurance (or any industry) widens the lens.
Instead of looking only at the system, teams begin to look at the broader environment. They start asking different questions: What work is happening outside the system? What tools, notes, or conversations do people rely on to get the job done? And why do those extra steps exist in the first place?
Those questions often reveal that the experience is bigger than any one tool or interface. This broader view shifts the conversation from “what are people asking for?” to “what problem are they actually trying to solve, and why?”
And once teams see that bigger picture, it becomes possible to move beyond incremental improvements and start rethinking how the experience as a whole could better support the people doing the work.
The ROI of user research in insurance
From the outside, user research can look like something that slows projects down—another round of interviews, more insights to review, more time before the Figma files light up.
At Cake & Arrow, we’ve learned over time that the opposite is true. As we’ve evolved from our early days building for the web into a human-centered experience strategy and design consultancy, user research has become a non-negotiable part of our process. We insist on it in our projects because we know it helps us get to solutions that work—for both the business and its users—faster.
When project teams observe real workflows early on, they gain clarity about what problems actually need solving. That clarity reduces time spent debating assumptions, aligning stakeholders, or iterating on ideas that don’t reflect real user behavior. It also helps identify longer-term opportunities and tradeoffs that may not be immediately visible in performance metrics.
This is also where some confusion comes in. Small, qualitative studies like those we recommend for our work aren’t designed to produce percentages or validate preferences. They’re meant to uncover patterns, behaviors, and deep understanding. That kind of insight is what helps teams move forward with confidence.
As a result, instead of discovering gaps halfway through development, or worse, after launch, teams can address them before significant time and resources have been invested.
In practice, this means projects move faster, decisions get made earlier with more confidence, and clients avoid building solutions that ultimately need to be reworked.
Which is why when someone asks at the start of a project whether we really need to talk to users, my answer is always the same: yes, absolutely.
To design experiences that truly support users, we have to start by talking to them.