Insights — September 13, 2023

AI and human-centered user research: a conversation

A team of Cake & Arrow researchers and designers sits down to discuss what a human-centered design practice that utilizes AI might look like

by Cake & Arrow


The speed at which AI is advancing and transforming industries is perhaps unprecedented. Since launching in November of 2022, ChatGPT has made waves in nearly every industry, from news and television to finance and insurance. Recently heralded as the most disruptive tech of the century, advances in AI have kindled widespread debate and endless think pieces about who and what the technology will and will not replace.

As human-centered researchers and designers, the team at Cake & Arrow is continually assessing emerging technologies to weigh their potential uses against our commitment to putting people first. We’re always asking ourselves how we might use new technologies for our own work and for our clients in a way that still centers human needs above all else. AI, of course, is no exception. It’s a phenomenon that, like the rest of the world, we are currently grappling with, experimenting with in our own practice, and of course talking a lot about.  

A few members of our team—Mike Piastro (UX), Kate Muth (Strategy), and Emily Cardineau (Marketing and Insights)—recently sat in on a webinar, “How user researchers can partner with (and not be replaced by) AI,” featuring leading UX researchers from across industries. This webinar, and the ongoing conversation among our UX peers, sparked a lively discussion about how we at Cake & Arrow think AI can best augment and supplement our practice, the use cases in which it might be misused or abused, and what the future of our research practice might look like with AI in the mix.

Here’s a transcript of that conversation, edited for flow and clarity.

____

An emerging perspective

Emily (Marketing and Insights): So we’ve been talking internally about AI: sharing what we’re reading, attending webinars, and experimenting with some of the tools on our own. That said, I think we all agree that we have not yet fully metabolized how AI will impact our research and design practice. Where do we stand?

Mike Piastro (UX): So far, a lot of the conversation around AI and UX I’ve heard has been fairly academic. There’s a lot of focus at the strategic level, on the ethics of AI, and less on its practical application to user research. As designers and researchers experiment more, I’m looking forward to case studies showcasing how AI is being utilized for UX in different organizations, and to discussions of specific tools and their impact.

Emily: I agree. I’ve also heard a lot of philosophizing and conjecture without much of a picture about how AI is actually being used. Makes it hard to assess any so-called “threat.”

Kate (Strategy): I feel like right now, many of us sharing strong opinions haven’t actually used AI in any significant way yet. It makes me wonder: who is actually using AI for UX research right now? Maybe it’s more development-focused practices that weren’t doing a lot of UX research in the first place, or people who do a lot of usability testing rather than exploratory, qualitative research. Or maybe no one is using it…

Emily: Totally. Seems like maybe the technology out there—even tech like Synthetic Users—isn’t so much replacing research as supplementing design processes that weren’t doing much research in the first place.

Mike: The fact that this all seems very nascent makes me think there’s still a lot of opportunity to get in on the ground level and experiment with it, even if it means taking its validity with a grain of salt.

Augmenting research or replacing researchers?

Mike: At this point, we probably can’t just say, ok cool, here’s this AI transcript analysis tool that’s going to pull out all these trends from our research and group people into personas. We’ll still need to do our own analysis to validate. We need to see whether the AI is adding another layer that maybe we wouldn’t have gotten to on our own.

Kate: Right, for me, insight doesn’t come from reading insights; it comes from doing the work. The thinking happens while I’m performing the analysis. I don’t know that I’ll ever feel comfortable just pushing a button to do the analysis for me. A tool like that would replace researchers, so it’s little wonder we don’t have much enthusiasm for it.

Mike: You have to find the big themes for yourself, and that requires using empathy to figure out how this translates to design. There’s a creative leap that needs to happen, and that can only really be done by a human, in my opinion. 

Maybe the idea is that the tech does come to a point where it draws conclusions based on the research and makes the leap for you, right in the design file, rewriting or repositioning things to make a new design. But that’s obviously way far off… Right?

Kate: Right? But from a business perspective, organizations could use more basic AI analysis to get user “insights” and a design brief without having to invest much. For a lot of companies, that would probably check the box that they’re doing their due diligence through user research, even if the insights aren’t valid and won’t actually help them make better design decisions.

Mike: Exactly. I think what’s dangerous is that you may start to get more and more of these unvalidated models pushing unvalidated conclusions. Maybe in some amazing future state, insights are somehow tied into real-time updates on a website and real-time feedback gathering through A/B testing, so you can actually measure and validate some of it. Barring that, in many cases you’ll go months or years with these conclusions driving design decisions before anything gets launched, and by that point it’s not super measurable.

Adopt AI or get left behind

Emily: As much hype as there is around AI, I generally feel the same way about it as I do about any new technology: If it helps me, great, I’ll use it, I’ll adopt it. Why not? I’m not afraid of it. But I want to use things that actually help me.

And that’s where the human-centered part of it comes in—if these tools aren’t making the quality of my work better, and if they aren’t making my experience doing that work better, then what’s the point? Just being able to say, “we use AI”? To me, it’s really important that we are tying these tools to actual results so we can use them in a smart way and defend our decisions around when and when not to use them.

Mike: I agree. But right now, for many of us, figuring out how to integrate AI into the research practice seems more about maintaining career momentum. Understandably, none of us want to fall behind and miss the acceleration of this technology. And we’re all in an environment right now where there’s some pressure to check that box for our company.

It feels very much like what we’ve seen with other technologies. Some work out, some don’t, but you have to get in there and start using them to see if there’s anything there, and if so we benefit from having been early adopters.

Kate: Right now, my exposure to AI is basically ChatGPT, which feels like a productivity tool that I can choose to use or not use in my own work without much impact. But if I look farther out, I can definitely see AI replacing some of the more technical, quantitative types of UX research and usability testing we do.

This would free up our researchers to spend more exploratory time with users and get to the deeper, more human insights. And then, once we’ve explored the high-level needs and behaviors and we’re evaluating things like button colors, we could run a program to test the infinite variables and optimize.

Today’s experiments

Emily: What thoughts do you have on how we might use AI in our own research practice at Cake & Arrow?

Kate: I’ve used it to generate qualitative interview questions within defined parameters for specific types of users. For example: “Give me a list of 10 questions to ask auto insurance buyers about their preferences.” I’ve also had it write workshop agendas: “Give me five ideation activities we could do, with descriptions.” Do I take it verbatim? No, but it gets me started. But these use cases aren’t specific to UX research; they’re variations on the ways many information workers are currently using it.
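
[For readers who want to try this themselves, here’s a minimal sketch of how a prompt like Kate’s might be scripted against an LLM API. It assumes the OpenAI Python client and an API key; the model name and prompt wording are illustrative, not a record of what Kate actually used.]

```python
# Minimal sketch: generating draft interview questions with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. Model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Give me a list of 10 questions to ask auto insurance buyers "
    "about their preferences."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# As Kate says: a starting point, not a finished discussion guide.
print(response.choices[0].message.content)
```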

Mike: There’s also potential for AI, at least theoretically, to excel in analyzing and recognizing patterns that might elude human perception. If AI can read an interview transcript and, for example, produce visually appealing representations like a 2×2 matrix or a word cloud that lines up with my own observations, that would be a valuable shortcut. It’s more of an added value than a replacement for core analysis, in my view.
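
[As a rough illustration of the shortcut Mike describes, the sketch below builds a word cloud from a transcript file using the open-source wordcloud and matplotlib packages. The file name and stopword list are assumptions; a human still has to judge whether the picture matches the research.]

```python
# Rough sketch: a word cloud from an interview transcript, one of the
# "visually appealing representations" mentioned above. Assumes a
# transcript.txt file and installed packages
# (pip install wordcloud matplotlib).
import matplotlib.pyplot as plt
from wordcloud import STOPWORDS, WordCloud

with open("transcript.txt", encoding="utf-8") as f:
    text = f.read()

# Spoken transcripts are full of filler; drop it before counting words.
stopwords = STOPWORDS | {"yeah", "um", "uh", "like", "know", "really"}

cloud = WordCloud(
    width=800,
    height=400,
    stopwords=stopwords,
    background_color="white",
).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```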

Looking ahead, the idea of using AI for testing, to evaluate screens and statistically determine which click path is more likely to succeed, is also really intriguing. Although it doesn’t seem like the technology is there… yet.

Already, I’m using it to help me acquire a very basic level of understanding of a topic. I’ll ask it things like: “What are the coverages for home insurance?” or “What are some reasons people might want to customize home insurance?” This can help me more quickly do things like stub out user stories or provide an outline for an interview script. 

The human touch

Emily: One thing that’s come up in this conversation about AI perhaps playing the role of UX researcher is the idea that people hate talking to chatbots. I took note when a panelist on a webinar we attended called it a “tedious and painful experience,” and it made me think about how the quality of the conversation would change if we were trying to talk to real people through automation tools.

Sure, AI can write a news article, but does anyone want to read an article written by a robot? Maybe if you’re looking for, say, a recipe. But if you’re looking for an interesting perspective on something, do you care what a robot thinks? There’s just something about the human element of communication that is about more than acquiring facts or information but about connecting with other people. In my experience, this is the magic of qualitative research. Two real people connecting. 

Kate: I used a chatbot this morning to get a return label for a backpack I’d bought my daughter. After the usual back and forth, I realized I was being excessively polite to the robot—typing “please” and “thank you,” “if you don’t mind”—like I was worried that it would sort me onto the customer service blacklist if I was rude. 

And this makes me think about how in our research work, we already have similar issues with interviewees playing up to the “machine” that is us—telling us what they think we want to hear, trying to sound smart. And if we don’t act to counter that behavior, it can impact the validity of our design decisions.

But when I’m the one moderating that interview and I feel like that’s happening, I can push back and change my tactics. I can go off script, I can tell them a story about my kids or ask them about theirs. I can break the fourth wall.

Mike: That’s a great analogy. In general, we find moderated research so much more valuable than unmoderated because we get such rich context and understanding. With unmoderated, which is computer-intermediated, you’re not actually observing people do things on a screen; you’re just seeing the results: 60% of people did this, and 40% of people were able to do that. You’re not getting the context. When you use technology to mediate the interaction, you lose the empathy. And the whole point of user research—and possibly the whole point of user experience—is empathy with the user.

Kate: Well, it’s like Emily said: who wants to read an article written by a robot? It dehumanizes the user, and if they know they’re interacting with a robot, they may behave more robotically because they know there’s no soul on the other side.

Emily: I’m thinking about some of the conversations I’ve had in user research. Like during a conversation about personal finance, one woman shared her feelings about her husband dying and how hard things have been since. And I wonder, would she—or anyone—have that conversation with a robot?

But then again we’ve all seen those stories about people having these long deep chats with AI versions of their dead loved ones that help them grieve. So maybe I’m wrong, and people can actually go deep with a robot. But I do feel like the context for how you get people to engage in that way seems incredibly hard to figure out. What would it take to create a responsive, empathetic AI moderator?

Kate: There are hundreds of recordings of me moderating interviews on our servers. I know AI could use that data record to replicate my voice and conversational tics. But could it create a “Moderator Kate” who goes woefully off script the way the real Kate does? Possibly, but for efficiency’s sake, we’d probably program it to stick closer to the plan.

And if that’s the case, then why bother having real participants who might themselves veer off topic when we could use synthetic users who would just answer the question? And then why bother going through the theater of having fake people talk to each other in the first place? What fresh hell comes of a conversation like that, and what insight do we lose when we lose spontaneity?

Mike: That’s the ultimate promise and terror of AI, isn’t it? It’s the Terminator movie where the tech just replaces all of us and doesn’t need us anymore. 

But, more practically, maybe rather than becoming “Moderator Kate,” AI just listens to Kate do her interview and, in the same way an AI tool might correct grammar in real time, it could give real Kate live feedback on her interview style, like “you’re skewing a little negative.” That could be a useful way to augment our work.

Kate: That would be more realistic, and more helpful. Thanks, Mike.

Researchers or ethicists?

Emily: It does seem like most of us are comfortable and feel good about the idea of AI as a tool that can help and enhance our work. But it’s hard to leave it there; it’s so easy to see the sort of dystopian trajectory, if the technology is really all it’s cracked up to be.

Kate: Yes. I’m hearing an idea discussed among UX researchers—that we might redefine our role to be more like ethicists, tasked with saving users from themselves. We’d lean further into the idea that user research shouldn’t just be about what users want or what their preferences are, but about their latent, unarticulated needs.

Emily: And this gets into the realm of behavioral economics, where you aren’t just trying to satisfy the user, but trying to change a behavior or achieve a specific behavioral outcome: get them to stop drinking, or to take their medication. I used to work in education, and when I did research with teachers, we were always trying to contend with the gap between what the teachers wanted and what the evidence base said they should be doing. Our role was to design tools for them that were evidence-based and user friendly.

Kate: Right… We’re always incorporating the needs of the user and the business in our product design, but we could all do more to consider the communal, societal, environmental, and moral contexts as well.

Mike: And if you, as the researcher/ethicist, are not doing that, and instead let these tools do the thinking for you, what happens to the field of UX? You’ll have a generation of new people who just never learned to do the critical thinking necessary for good design. And when that happens, what happens to the products? Do they get worse? And how do you know if AI is feeding off worse design?

Emily: I think that the critical thinking, the analysis, the problem solving—the things some people are claiming these tools can do for us—is where a lot of people, especially people like us, find joy and fulfillment in our work. It’s like what many have said in reference to ChatGPT: If writing is thinking, and thinking is what makes us human, then what happens when AI is doing the writing for us? Do we stop thinking? Do we stop being human?

Kate: Look at us. Getting all philosophical after starting this conversation wishing everyone else would be more practical. It seems we can’t help ourselves. 

Emily: There is something about AI that just kind of takes us there.

Mike: Yep. I think that means it’s time for us to actually dig in and start using some of this stuff!
