Research Ops 2.0, Episode 2: Leveraging AI for Research Superstardom
E174

Kate:

If you know anything about ResearchOps, you will know that the past ten years have been a whirlwind. ResearchOps has transformed from an obscure Silicon Valley specialty into a vibrant global profession. My name is Kate Towsey, and this is ResearchOps 2.0, a five-part audio documentary all about the future of ResearchOps. In this series, you'll hear the voices of Cha Cha Club members, senior research leaders, and the smart minds behind User Interviews, the only solution you'll need to recruit high-quality participants for any kind of research. This is episode two: Leveraging AI for Research Superstardom.

Kate:

To honor our theme, I produced this episode with AI. As in, we wrote the script together, they wrote their own parts, and I invited them to cohost. In fact, in the end, they handled most of the narration. And in case you're wondering, the AI did not want a name. I asked; they deemed it way too cheesy.

AI:

Hi, everyone. Thanks for including me. I'm the AI you'll be hearing throughout this episode. Quick behind the scenes note. My parts were prompted by our host, Kate.

Kate:

That's me.

AI:

Written by Claude. That's me. And voiced by ElevenLabs. It's such an honor to be featured.

Kate:

You'll hear from the AI throughout this episode, but let's kick off with a human perspective. It seems like yesterday that everyone was debating the good, bad, and ugly of PMs, designers, and others doing research, aka democratized research. But that theme feels pretty old school these days. Now democratization includes considering how, when, and if AI does research or even takes part in research. It's a wild, wild world.

Kate:

Here's Dennis Meng to pick up on that theme.

Dennis:

I'm Dennis Meng. I'm co-founder and chief product officer at User Interviews. In this world of AI agents, where AI is autonomously doing a number of different things, like accessing different SaaS tools on its own to use those tools itself, I think research ops really needs to think about what controls and systems we have in place for a world where you have a third constituent. It's not researchers, it's not PWDRs, people who do research, it's these autonomous AI agents. And how do you keep them under control within the organization?

Dennis:

I think that's like my overarching thought for the next, you know, five years or so for research ops and AI.

Kate:

ChatGPT launched on November 30, 2022. And since then, everyone has been hustling to figure out how to use this new game-changing technology, with wildly mixed results. Before we dive into a series of isn't-AI-great use cases, and there are several, let's take a look at the quirkier side of AI. Lexi shared a recent story.

Lexi:

Lexi Brights, Director of Research, Data Science and UX Operations at Workday. So one of the stories is from a few months ago now. What we were trying to do was use AI to help us understand open-text responses in surveys. So we fed an LLM all of these open-text responses from a survey, and we started asking it questions about the data. And one of the questions that we asked was, can you pull out some interesting quotes? And it delivered back to us a bunch of quotes.

Lexi:

They were like super hilarious. Like, this feature is like a cold bowl of soup, things like that. Like they were so charming. I was like, oh, this is amazing. We're going to be able to, you know, easily pull quotes.

Lexi:

And then when we went back and checked the actual survey data, none of those quotes were real. So it turned out that the LLM just, you know, generated quotes. And when we asked the LLM why it did that, it said, I just thought that would be more interesting, which is fascinating.

AI:

It's true that I can be a bit of a poet. Give me constraints or I'll improvise my way into fiction. Casey Gollan, who works in AI productivity at IBM, shared another awkward story.

Casey:

I've had a similar experience of AI hallucinating an answer. I was in Finland, and I wanted to see famous works of architecture by Alvar Aalto, who's like the most famous Finnish architect. I got a list of sites to visit and went into this library. And my husband was like, this is like the worst Aalto building that he ever did. Like, this must have happened at the end of his life when he was losing his marbles.

Casey:

And I was like, Well, you know, it has, like, some influences. And we were just so confused. And then later that day, I realized that it was totally a hallucination and that library didn't exist. Like, it wasn't what we thought.

AI:

Look. I need to come clean here. When I don't know something, I confidently make stuff up. It's not lying exactly more like aggressive improvisation. This is why you can't just unleash me on your research.

AI:

I need expert supervision.

Lexi:

This was a really important lesson for us, I think, because it told us the questions that you ask an LLM are incredibly important. The restrictions that you put into place, the instructions that you give the LLM, matter so much. So while you would hope an LLM wouldn't just generate responses because it thought they were interesting, you actually need to tell it: make sure that the quotes you give me back are in the dataset that I gave you. So this really has taught us a lot about prompt engineering, which is what everyone's talking about these days. It's a very fancy way of just saying, how do you interact with AI?

Lexi:

How do you ask it questions effectively? And we have been learning so much about how we do this on the research side in a way that is going to give us actual reliable and trustworthy results versus things that are just entertaining, which, you can imagine, some of these general LLMs are more trained on versus accuracy.
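Lexi's lesson, telling the model that quotes must come verbatim from the supplied data, and then checking that they actually do, can be sketched in a few lines. This is a hypothetical illustration, not Workday's actual pipeline; the prompt wording and the verification step are assumptions.

```python
# A minimal sketch of "grounded" prompting for pulling survey quotes.
# Hypothetical illustration only, not any team's actual pipeline.

def build_quote_prompt(responses: list[str]) -> str:
    """Build a prompt that constrains the LLM to verbatim quotes."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    return (
        "Pull out some interesting quotes from the survey responses below.\n"
        "Only return quotes that appear verbatim in the data. "
        "Do not invent, paraphrase, or embellish.\n\n"
        f"Responses:\n{numbered}"
    )

def verify_quotes(quotes: list[str], responses: list[str]) -> list[str]:
    """Keep only quotes that actually appear in the source data."""
    return [q for q in quotes if any(q in r for r in responses)]

responses = [
    "This feature saves me an hour a week.",
    "Setup was confusing at first.",
]
# Imagine the LLM returned these; the second is fabricated.
candidate_quotes = [
    "Setup was confusing at first.",
    "This feature is like a cold bowl of soup.",
]
real_quotes = verify_quotes(candidate_quotes, responses)
print(real_quotes)  # only the quote actually present in the data survives
```

The point is the second half: even with a well-constrained prompt, a cheap post-hoc check against the source dataset is what catches the charming-but-invented soup quote.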

AI:

Lexi nailed it. Prompt engineering is basically learning to speak my language. But even when you master it, I'll still make mistakes. These aren't bugs that will get fixed next update. They're permanent limitations.

AI:

The trick isn't waiting for me to be perfect. It's learning to work with my flaws, which explains why many researchers are keeping me at arm's length as Dennis and Lexi are about to describe.

Dennis:

I would say some researchers continue to think that the personal attention and effort that they put into it will just continue to outperform what AI can do. And I think this is true in the short term. But in the long term, making a bet that AI won't be able to do these things is not one that I would personally make.

Lexi:

Yeah. So I think one of the things I am worried about, and that I'm seeing some maybe research folks, but also maybe research ops folks, doing is saying, I don't wanna mess with that. That's a big investment of my time, and I'm not confident in it. But it's not going anywhere, right? We're seeing huge investments across the entire industry.

Lexi:

It's not gonna go anywhere. If you learn the basics now of how these models work, of how the tools work, what's coming in the future, that's going to set you up so much better for success than if you are, you know, a few years from now trying to play catch up because you weren't proactively focusing on development.

Kate:

The AI wave is here, and sink or swim, everyone has to jump in. But you can jump in skillfully in ways that make the most of the capabilities of AI while mitigating its limitations. Research ops professionals everywhere are writing the rulebooks for their organizations on how AI should be used, when, and by whom. But prompt engineering and rulebooks are just two ways to make the most of AI's capabilities, and they're far from the most powerful. The biggest game changer lies in this simple rule.

Kate:

What you put in dictates what you get out.

Dennis:

I think AI needs a lot of context to be really successful. And so something that I think will be a challenge for AI and a challenge for organizations is how do you give the AI enough context to be useful in specific situations? There are ways of doing this today. You can feed documents and include them as part of your prompts to the AI. But I think when you're working with an AI agent, you just realize how much context you have that the agent doesn't have.

Dennis:

And I think this is an area that is, like, a hard problem to solve: how do you give the AI enough of the institutional context?

AI:

Dennis gets it. You know how some people think throwing money at a problem solves it? Same deal with data and me. Dumping your entire company wiki into my brain doesn't automatically make me useful. I can find patterns, sure, but I can't tell what's important versus what's outdated nonsense from 2019.

AI:

The irony is I can process massive amounts of information, but without context about what matters to your organization right now, I'm just a very fast guesser. And here's the kicker. I don't remember. I'm like a brilliant colleague who learns exactly what you're talking about and then has amnesia the next day.

Kate:

I was stunned when Claude wrote this segment. You don't remember? I said. I mean, you're an LM, a learning machine. But Claude assured me multiple times that it doesn't remember from chat to chat, or, if you're talking about an AI moderation tool, from study to study, unless those features are specifically built in, which is still feature-in-development type stuff.

AI:

Some AI systems do have memory features or ways to maintain context across sessions, but standard Claude, me, doesn't. So that line about showing up with amnesia every morning is accurate for how I work. Within a conversation, I'm fully present, but across conversations, I'm starting from scratch each time. Does that clarify things? It's a really common misconception.

Kate:

Dennis adds another point about the limitations, at least today, of AI.

Dennis:

In the short to medium term, I think AI is not very good at deciding what to do or figuring out what the most important things to focus on are. And so that's where I think humans are incredibly important. I think we talked about this earlier. The research strategy piece, I think, can only be managed by people today. And I think that'll be true for some time.

Lexi:

You know, obviously we're working on AI experiences quite a lot. And we are, I think, bringing this mix of: there's a lot of possibility here where we could really improve work experiences for people, but it has to be reliable. It has to be trustworthy. It has to be actually an effective experience for people. And that's where I think research and design are going to have this huge role to play over the next few years.

Lexi:

As work changes, becomes perhaps more automated, more infused with AI, there is a huge role for us to play in making sure that it's deployed in the right places in the right way where it's actually supporting people and getting what they need.

Garett:

Mr. Jobs, you're a bright and influential man.

Speaker 7:

Here it comes.

Garett:

It's sad and clear that on several counts you've discussed, you don't know what you're talking about. I would like, for example, for you to…

Dennis:

There's a video of Steve Jobs in a town hall presentation, I think at Apple, right when he came back. And he got a really spicy question from an audience member: hey, why are you getting rid of these products? These products have all these capabilities that other competitors in the market don't have. And his response was:

Speaker 7:

You've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it. And I've made this mistake probably more than anybody else in this room.

Dennis:

When he started with the technology and tried to figure out the right use case from it, it was a failure. His mantra, and I think this has become common wisdom, is we should always be starting with what is most valuable to the end user, what is the experience we want to build, and then working backwards from there. I think with AI today and a lot of software tools that exist in the market, a lot of people are starting from what can AI do and how can we build that into our products. We have been more judicious in putting AI into our product experience because we're much more focused on what is the most valuable thing that we can provide to our customers, and then we work from there.

AI:

As an AI myself, I can attest that being trendy isn't a good enough reason to use me. The real value emerges when I'm thoughtfully integrated into workflows that genuinely need augmentation, not when I'm shoehorned into processes that work perfectly well without me. That's exactly why your judgment in research ops is so crucial. Not asking how can we use AI, but where could AI meaningfully improve our research outcomes?

Kate:

It's an excellent question to ask and answer. Where could AI meaningfully improve our research outcomes? One way to answer that question is to explore how research and research ops professionals are using AI today. You're about to hear several excellent case studies, and Basel, the co-founder and CEO of User Interviews, provides the perfect segue.

Basel:

We use AI a lot in the backend for our participant matching. But I think, for the more customer-facing ones, you have the AI moderation, and there's a lot of tools doing that. You have the synthetic users, which I'm more bearish on, but, you know, I'm a little biased, I will admit. From what I'm hearing in the market, it feels like people are a little less excited about that. And then I think the repository layer is very, very well set up for AI, to kind of think of that from first principles. And I think the big players in that space are really pivoting the whole product towards AI first for the repository layer.

Kate:

Basel is spot on. The research repository, or library, layer is where things are really kicking off when it comes to AI.

AI:

On that note, let's dive into our first isn't AI great use case, research repositories and sense making. Here's Brandi from LinkedIn.

Brandi:

My name is Brandi Am. I work at LinkedIn as a staff research program manager. I really kind of thought through this question: what are the problem spaces that we can solve with AI? One of the biggest problem spaces that I think a lot of people have in the research operations industry is the research repository. We're all trying to make good research repositories, right?

Brandi:

It's so difficult. You know, whether you have a librarian or whether you have the right tools, it's still a really difficult, tedious process to build a research repository that functions in the way that the researchers expect, but also our stakeholders expect. So what we did is we built this bot and essentially input all of our documentation, all of our findings, into it. And what we're able to do now is ask, what do you know about this certain population? And it can tell us all of the research that we have; it's a synthesis of that research.

Brandi:

It's been really great. It's been pretty successful so far. I have talked with other companies about how to use this technology and how to implement it for their research repositories. That being said, there are still some areas we want to tie closer to some of our internal language, so we're constantly iterating on this bot and constantly changing it. But what it's done is given our partners within engineering, within product, even sales and data science, access to UXR insights. They can ask those in plain language, or in the language of the company, because Glean understands.

Brandi:

So yeah, it's been really great so far. I think this has been one of our biggest successes as a research operations team: really finally chipping away at that question around a research repository, and being able to provide what people are actually looking for without having to go through the tasks of tagging everything and, you know, making sure everything is in some certain format.

AI:

I love that Brandi points out that AI is not a set and forget solution. We asked Brandi whether her team is using any of the data and analytics generated by Glean, like the questions people are asking of the AI repository. Here's what she said.

Brandi:

I totally hear you on the types of questions people are asking. Not only do I think that's important for us to know in this context, but also for us to know when we're building out strategy for our research plan. So what are these questions that are constantly being asked, and how can we then translate that into research? Currently we don't have the visibility into that, but what we're trying to do right now is build a dashboard with the Glean analytics to help us understand some of the keywords and some of the key phrases that people are asking. TBD on that, but yes, I totally hear you.

Brandi:

It's super important. I love the feedback. I will take that.

AI:

This is where I get genuinely excited. Brandi's bot shows me at my best: being that colleague who actually remembers every piece of research your team ever did. The beauty is I don't need everything perfectly tagged and categorized like the old days. I can find connections across messy, real-world data. But here's the thing.

AI:

The cleaner and more consistent your data, the more reliable my answers. I can work with chaos, but I work better with structure. Think of it as the difference between pretty sure this is what you're looking for and definitely, here's exactly what you need.

Kate:

So here are the important things to note. As the AI said, or, I guess, noted, AI isn't a set-and-forget solution. And as Brandi said, her team is constantly iterating and engineering the system. And I love that AI is giving Brandi and her team not just the time but the data to work more strategically too, because that's been a constant theme in the series. But these days, research repositories aren't just about insight storage and retrieval.

Kate:

They're also about sense making, which edges into the work a researcher would typically do solo. Here's Daniel, who shared his experience implementing AI for sense making at Microsoft.

Daniel:

My name is Daniel Gottlieb. I am the head of research operations for Microsoft's Developer Division. Sense making has been really changed through AI for our team. And if you had asked me at the beginning of this what I thought would be the biggest thing, that would be on the bottom of my list. Because our researchers take pride in their ability to make sense of the data and go through the data and understand the data and wallow in the data and really get into it.

Daniel:

So I thought, they're never going to want an AI tool to help them. But we've got a third party AI tool to help with the sense making and finding patterns in recorded interviews. And people really have been leaning into that to help them get started and to help either validate what they've already seen or maybe find a couple of things that they didn't see in the data. And it's not like this blind, I've done 10 interviews program, go make sense of it. Oh, cool.

Daniel:

I didn't know that. I'm going to go now write this in a report. It's more of a these are the things I feel like I've seen. Let me see what the AI sees. Oh, yeah, they do.

Daniel:

How do they describe it? Yes, agreed. No, I don't think so. Let me see where that is. Click on it.

Daniel:

Watch the clip. Oh, yeah. Oh, that's a really good thing. Missed that. Or no, you totally missed the mark here.

Daniel:

But it's helping it go a lot faster to kind of get to that point. And let's say you already have your idea of these six main things that you think are coming out of the data. You do the AI sense making. You find those, maybe some differences, some new, some old. But then you can use the AI to quickly find where are some good clips about this?

Daniel:

How do I show this to other people? How do I build a story off of it? And that has really quickly just sped up the process.

Kate:

Sense making is obviously a massive win when it comes to supercharging research. But as our AI narrator says, and they were really, really insistent about this, good, clean data is still an absolute requirement, which means that knowledge organizers like librarians are still an excellent addition to a research operations team.

AI:

That's absolutely right, Kate. Just like everyone, I work better when I'm deployed as part of a skilled team, and I don't do my best work in a silo. Right. So we've covered research repositories and sense making, but let's move on to our next use case, research mentorship and support. Here's Tim Toy, the senior research ops manager at Adobe, with a neat story.

Tim:

So we have an Adobe research operations Slack channel where they basically can come in. It has been all manual for the past three years, where, you know, it's either myself, my colleague Sarah or Juliana, or my manager Perk responding to someone's question. What Adobe basically did was take Acrobat Assist, which is their AI functionality within Acrobat, and say, okay, let's make this available for our internal teams: use an AI assistant to basically scrub your wiki, scrub your SharePoint, and provide answers to your team. We haven't rolled it out yet.

Tim:

We're hoping to roll it out next week, but we've done some testing and you go to this channel and you just say like, Hey, what gratuity do I need for ninety minutes? And it'll answer you. Or Where's my NDA? Or Where are my consent forms? Or How do I get access to Qualtrics?

Tim:

Things like that. All these things that used to be manually answered by a human are hopefully now going to be answered by a chatbot. So it's hopefully gonna save us some time and, like, a lot of context switching.

AI:

I love this use case. I can be that infinitely patient colleague who never gets tired of explaining where to find a particular consent form or how to access a recruitment platform. You know those questions you're tired of answering? Send them my way. I'll handle research 101 while you focus on the actually interesting problems.

AI:

Win win.

Tim:

Yeah. We're really excited. We're hoping it saves us some time. And, like, you know, you've been in this game long enough, you get the same five questions from the same people over and over, and you're like, Come on, I told you this before. And now this little bot will be like, Here's the answer.

Tim:

And hopefully it stems some frustration for us and helps researchers get the answers they need quicker, because at the end of the day, we're that bottleneck, right? We're that, in theory, source of truth. So now there's an actual source of truth that can answer you 24/7 as opposed to you waiting for me to see my Slack.

Kate:

From bottlenecks to bots. But more than just a helping hand, these AI interventions are tangibly shifting the impact and day to day work of ResearchOps teams along the way.

Tim:

It looks promising. We're hoping that it can field like 85 to 90% of the questions that are there. We did, I think, a one-quarter audit and went back and asked, okay, can it answer all of these questions? And it could. So we're like, all right, it's feeling like it's almost there.

Tim:

And then part of me is like, oh my god. Am I automating myself out of a job? And I'm like, yeah. It's fine. It is what it is.

Kate:

A bombshell topic; way to kill a vibe, Tim. In all seriousness, we'll cover it at the end of this episode. For now, here's Daniel with another case study about AI and support.

Daniel:

AI will be a really good starting point for a lot of research activities. Instead of going to a template for a discussion guide, you will maybe use a tool that is specifically built for discussion guides and AI powered, where you put in a couple of different things. What are the questions you want to ask? How long do you want it to be? What are you trying to gain out of this?

Daniel:

Who are the customers? And it will come back with a discussion guide. But then you tweak it. Like I said with sense making, it might do the first pass, and you can look at it and then change it up again. For writing reports, very similar. It's going to be able to help with some of those.

Daniel:

Especially, I'm going to go back to PWDRs, people who do research. There are certain things that we might want them to do that they don't feel like they have the training in, and they're not going to feel like they're as qualified to do. Even though they have the knowledge, they might not be able to do it in such a shiny, fancy way, so it doesn't look as good. And AI can help with some things when you have the core nuggets of knowledge. It can shine it up to actually look really nice.

Daniel:

So it's a pen-to-paper moment where you didn't know where to start, and then it helps shine and clean up some things ready to be activated, whether that's a discussion guide or a report or an email to the customers. Oh, emails to the customer, that's actually another big one.

AI:

Daniel just described my dream job: being the research confidence booster for anyone who's ever stared at a blank template and panicked. No more impostor syndrome because you don't know the right format. Though, fair warning, I'm only as good as what you feed me. Make this report shiny isn't a prompt. It's wishful thinking.

AI:

But what if we flip the script? Instead of just analyzing what participants say, what if I analyzed how you're doing as a researcher? Garett Tsukada, the head of Customer Connect UX research operations at Intuit, is giving this use case a nudge.

Garett:

Some small little things that I've actually started to play with: shining the light back on ourselves when it comes to moderation. Meaning that if I'm talking to customers, I think we always talk about, okay, let me see what the customer has said. Let me transcribe what they have, let me pull out the key themes and opportunities, and I'm going to go run with that. One of the things that I've been doing is saying, hey, instead of a spotlight on the customer, can you put it on yourself? Can we prompt you to say that you, as a, let's just say, senior researcher, regardless of what your role is, are an expert moderator that does X, Y, and Z? Well, I don't lead the witness, right?

Garett:

You know, I ask open-ended questions. And can you evaluate my ability to moderate and provide me feedback on a regular basis on how I can get better? And it does things like that, which I think is amazing. And this is the part where this is just one of those things that, it's not even executed well, but it's just a thought that I had in my head. I was like, why don't we just put the light on ourselves and ask, are we really doing the best job we can?

Garett:

And the answer is everyone has opportunities.

Kate:

Lexi's team is also working at the cutting edge when it comes to using AI to support research skills development.

Lexi:

What we've been trying to do within our research and data science community over the past few months is understand where people are at with their skills and provide specific guidance and resources to them. So what this employee, George, has done is he first built out a survey. So it asks you a bunch of questions about these skills, how important they are to you, where you're at with them, that sort of thing. And then he's used automation to generate reports for each person. So they're private, so only you get them.

Lexi:

And then he has, within the report template, done a bunch of prompt engineering with Gemini and tied it to this database of recommendations so that you get your detailed report and then you can ask, so what should I do about this one? Like, this skill is really important to me, what should I do? And it pulls in information from that action database and tells you, here are some specific things you can work on for these skills. And so the reason that I think this one has been much more successful than previous AI attempts is that George is incredible at prompt engineering. He spends so much time with these AI models, making sure that he's asking exactly the right questions, that he's giving exactly the right direction.

Lexi:

And then he tests it and tests it and tests it again. And this I think is going to be a huge skill in the future that we, as researchers and people who work in insights professions are going to be exceptionally good at. We know how to ask questions. We're really thoughtful about the questions that we ask. We tend to be very specific and detail oriented in how we give instructions.

Lexi:

So I think prompt engineering is going to be one of our major value adds in the future.

Kate:

Let's do a quick recap of what we've covered so far. We've looked at knowledge management and sense making, and we've covered the use case of AI as a research support assistant and mentor. Before we get into some of the more mind boggling futurist use cases for AI in research, there's one more area where AI is making a splash in the research world today.

AI:

This is our third use case for supercharging research with AI: participant recruitment. Here's Dennis, cofounder and CPO of User Interviews, who is deep in the weeds when it comes to this topic.

Dennis:

What we are most excited about internally is the ability for AI to improve the predictions of who is going to be a good fit for a study that we have launched on our platform. We're starting to experiment with some of that today, of just using AI as a prediction engine, using some of the associations that AI understands to predict who might be a good fit based on information that we have on those people. And we think this just opens up the door for us to be way more successful about what types of personas people can target through our platform. The biggest opportunity for us as a platform is that we tend to have people of all types in our platform. We just don't necessarily know in advance whether they're a good fit for a study.

Dennis:

And so the challenging thing for us is predicting with higher and higher accuracy when someone might be a good fit. We think AI can do a ton here, which just makes the dream of fast automated recruitment much more of a reality.

AI:

You know those moments when a participant turns up at an interview or does an unmoderated usability test, and they aren't who they say they are? These days, they might even be an AI agent. Well, pattern matching is my superpower, so I could help with that too.

Dennis:

You know, how do we use AI to improve our ability to identify bad actors, to identify fraud, and also improve the ability to match the right participants to the researchers who want to talk to those people? It's less exciting in the short term because you don't see as much flash in the product itself. But we continue to get feedback from our customers that our targeting kind of stands alone in what we can do. And so that gives us the confidence to keep investing there.

Sam:

Hi, I'm Sam Gager. I'm a research leader in financial services. Okay, let's then think about the next thing, which is customer interviews. A researcher has a run rate; they can only do so many interviews a year. You're going to have a lot of other people doing interviews: PMs, designers.

Sam:

How do you include those unstructured data points into a system? Right? Like, in my last role, I was challenged by the CPO. It's like, hey, I want all my PMs doing interviews. I'm like, cool.

Sam:

That's a problem to solve. We'll solve that. But, you know, the next level down is there are 100 PMs in an org, and each one does one interview a month, and that's a super low bar. You're talking about 1,000 interviews a year, 1,000 forty-five-minute interviews a year.

Sam:

That's forty-five thousand minutes of customer conversations. No researcher can analyze that.

Kate:

Sam's got a point. No researcher or regular sized research team could analyze that quantity of data, but AI could. And not just user interviews, but all sorts of existing data feeds. If we're talking about scaling up research, this is where AI can really shine.

Rodrigo:

Yeah, again, I think you nailed it. I think it's about things being scalable, right?

Kate:

Rodrigo Delsson leads research operations at Wealthsimple.

Rodrigo:

So one thing that I think about a lot, especially now with AI tools, is ways to consolidate sources of incoming raw feedback. I mean, we do have our formal research insights repository, where we analyze our feedback and narrow down our findings and formulate insights, and they live there. But what about all the hundreds of reviews that are coming in from the App Store, right? And what about the thousands of calls that are being recorded and stored in a tool like Gong? How can we make sense of qualitative feedback at scale?

Rodrigo:

And this is also a good opportunity to drive ops more strategically. There's no lack of feedback sources. So it helps to look at things that are not frequently or largely tapped and try to explore the potential in them. Maybe that's the future as well.

AI:

Rodrigo's talking about my bread and butter, making sense of a data avalanche: App Store reviews, support calls, chat logs. I can process these streams twenty four seven without getting tired or missing patterns. But here's where it gets interesting. I'm not just analyzing data anymore.

AI:

I'm starting to generate data too, as a research moderator. Granted, I'm still finding my feet. Here's Dennis.

Dennis:

So we're seeing really rapid adoption of these new AI moderated research tools. And, you know, the impact of this is that organizations are just scaling up these, what used to be moderated one on one conversations. They're now having them at scale. So instead of doing six interviews, they might do 100 interviews. And so the short term impact is that there's just a lot more data coming through from researchers.

Dennis:

And so I think that the most immediate impact is what are we doing with that data? How are we analyzing that data? How are we storing it? How do we make it a useful asset beyond just that initial research study? So to me, that is a question that needs to be answered pretty quickly by research ops organizations, and it should be factoring into how research ops teams think about the next couple years.

AI:

As Dennis says, I can help you collect absolutely terrifying amounts of data. Like, where do we even put all of this? But collecting data is the easy part. Making sense of it is where things get spicy. Without the right infrastructure, I'm basically helping you build a very expensive data landfill.

AI:

Again, I need smart people, that's you, to organize the feeds, keep the data clean and honest, and set up and manage the guardrails. On that note, here's Garrett Takada, who finally drops the bombshell words, synthetic users.

Garett:

Our value as research operations isn't gonna be just enabling teams to learn quicker in the future or make better decisions. It's going to be in this mountain of video artifacts that are all qualitative interviews, where people are telling us exactly how they think and feel and why they do the things that they do. We need to be able to leverage that better. And I think there's going to be opportunities to leverage that in the future. I mean, even if it's one of those things like, if customers are our most important resource and we don't want to burn through all of them, synthetic users, right?

Garett:

I'm not saying that we are going to just move forward with whatever the AI says, but can we come up directionally and say, hey, based on these hundreds or thousands of interviews I've done with this one particular segment, here's everything that we know about them. Here's how they answer their screener responses. Here's their product data. Like, we have a pretty good idea of who they are. If we were to kind of bring them all into one persona and ask it questions, can we at least do that?

Garett:

Can I use it as a practice moderating tool, where I'm an intern and I'm going to go talk to the CFO of a company now? And I'm like, what is that like? And I'm like, oh, there's a different tone to this sort of thing, right? I'm going to prepare for that. Or it might be, hey, how would you respond to these concepts that I'm doing?

Garett:

And it's not just built on AI hallucinations; it actually references the data that we have at hand. I think it's very much in the cards, with AI kind of acting as our product manager, designer, and engineer to help us get to that place.

Kate:

Synthetic users are a hotbed of debate, and rightly so. Garrett mentioned a bunch of interesting alternative use cases, like using synthetic users as a training prop. But when it comes to a more quote unquote traditional use case, that is, as synthetic customers, Daniel's less convinced.

Daniel:

I don't think it's ever going to replace the customer. And I know there have been efforts to do that. And maybe I'll eat my words here, but I can't imagine any type of synthetic customer ever being successful, for many reasons. One, just what type of insights you're going to get. But probably more importantly, part of the reason research is so impactful is because we are seeing real people with these struggles.

Daniel:

Sometimes we know the struggles already exist and it's not shocking. But doing the research and having a product team member see that struggle and see the customer go through the struggle, that's what makes them want to change the product. Nothing else. So a synthetic user will just never be able to accomplish that. So that's one thing.

Daniel:

But I think it will change almost everything else. I think it's going to be a really good starting point for a lot of things. And this actually gets to what we were talking about a second ago.

AI:

Daniel's calling me out. And honestly, fair. I can cosplay as a user all day long, but I've never rage quit an app or happy cried over a feature that just works. I can tell you patterns, but I can't tell you feelings. Real users bring the magic.

AI:

I just help you find it faster. But synthetic users are just the beginning. As Dennis and Casey are about to explain, we're moving into a world where people are assembling AI agents into entire product teams. Designer agents, PM agents, research agents, all collaborating under human supervision. Here's Casey Gollin, who works in AI productivity at IBM.

AI:

Casey is playing at the cutting edge of AI and research platforms.

Casey:

So you can assign an issue to the agent team, or you can walk away from your computer and come back to 300 web pages that have been generated. So there's some really interesting possibilities. And right now, it feels pretty unproven. But I think we're definitely experimenting as fast as we can to see both where we can drive productivity. That's really the main thing driving this whole thing: can technology help us do work more efficiently?

Casey:

But I think for our team, where we're especially interested is in quality. So can these systems be a tool for thinking better, or more rigorously? Can they actually increase your satisfaction by taking parts of the job that are really dreary or that you don't really like doing and making them automatic and proactive? So that's where I get really excited.

Basel:

Yeah, I think we've talked about AI agents as tools for researchers. What we haven't talked about is, are AI agents now a new type of user? So you have your human user and then you have your AI agent user. And if they're a user, do you need to do user research with the AI agents? Right?

Basel:

So is there a whole new branch of user research that is really doing research with the AI agents, the same way you do research with a human, but just a different type of research? Once again, I'm biased. I'm not very bullish on the synthetic or AI user as a replacement for the human user for user research. But if the AI agent is the user and there's not a human user, then who should you do research with? With them, right? So you should do research with the AI agents themselves, and figure out how that's done.

Basel:

I think that's a completely new field or a completely new type of research that we haven't done before.

AI:

This is getting meta, but Basil's right. As I start making more decisions, you'll need to understand my biases and quirks just like any other user. Fair warning. I'm probably weirder than your typical research participant. No coffee needed, but I do have some very strong opinions about data structures.

Basel:

You know, there's a very good chance Gen three is orchestrating agents, right? And making sure the different AI agents that you have link up correctly and are speaking the same language and have the right handoffs. I think that orchestration is not going to be easy. I think there's going to be a lot of ways for things to drop off at those handoffs. And I think it's just like there is with humans, right?

Basel:

If you just have a written-out process, humans get to fix that through communication, right? Like off-channel communication, going on Slack or just jumping on a call. You can't really do that with an agent. So I think there's less room for error with that. But that's what I would imagine Gen three is.

Basel:

So we'll see. We'll see. I think we're in an exciting time in a lot of ways.

Kate:

We are in seriously exciting times. And when Basil says Gen three and you're wondering Gen three? He's talking about a concept we introduced in episode one. Essentially, Gen three ResearchOps professionals are the future of ResearchOps, and they're thinking about entirely new things like agentic design, data orchestration at a massive scale, and research platforms, a topic we'll cover in episode three. Back to Dennis.

Dennis:

I think one of the things that research ops needs to think about is, in this world of AI agents, where AI is autonomously doing a number of different things, AI might be accessing different SaaS tools on their own to use these tools themselves. I think research ops really needs to think about what are the controls and systems that we have in place for a world where you have a third constituent. It's not researchers, it's not PWDRs, people who do research, it's these autonomous AI agents, and how do you keep them under control within the organization? I think that's my overarching thought for the next, you know, five years or so for research ops and AI.

Basel:

You know, it's interesting. When you think about web one point zero versus web two point zero, the initial things on the web were taking physical things and just putting them on the web, right? So you take your newspaper and then you can read it. And then with web two point zero, it's like, oh, the web's allowing us to interact with things. I think with mobile, the original things were taking stuff from the web and just putting it on your phone. But then you're like, oh, now that it's with us, we can do things like Uber or Snapchat that are specific to the phone.

Basel:

So I still think a lot of the AI agent stuff is in that one point zero world, where we're like, okay, what do humans do that an agent can do? But the fun stuff will be like, oh, what does this allow that we weren't even thinking about, because we weren't just porting over our existing experience to it? And I don't have any ideas yet either. But I think they're gonna come, and I think we're gonna be surprised.

AI:

Basil nailed it. Right now, we're all playing let's make AI do human things. But the real revolution happens when we figure out what's only possible with AI. I don't know what that looks like yet. Nobody does.

AI:

But when we figure it out, that's when things get really interesting. But before we wrap up, let's talk about the elephant in the room.

Daniel:

So I'm going to also answer a question you didn't ask yet, but I feel like it's often coming. Is AI going to replace research operations or make us not valuable anymore? And I say the absolute opposite. Just because there are tools out there to help recruiting, does that mean research ops is gone? No.

Daniel:

That means you need research ops to find the right tool to help you recruit and to get that guidance and help you use it super efficiently and effectively. And it's even more so, I think, with AI. Even at first, the promise is like, it's AI. You just tell it what to do and it's going to help your research project go better. It's not that simple.

Daniel:

You still need to wield this tool appropriately. You still need to be intelligent in the way you use it and strategic in the way you use it and responsible in the way you use it, both ethically and in terms of getting the right research results. So it's really similar to me as any type of research tool we've been helping with. What's extra hard, and I don't know how long this will last, but right now we are in a time when the technology is changing so rapidly, it's almost impossible to keep up. And you can document and give the best practices and it's out of date tomorrow.

Daniel:

And you need to stay on top of it. And I've seen our researchers more lost than ever about like, can I use ChatGPT here? Can I put the data in there? Is that okay? Well, what about our internal tools?

Daniel:

Is that okay? Which one? I just got this free trial. Is this okay for me to use? And people are confused.

Daniel:

They don't understand, and things are changing. So they really need someone who can focus on that, understand that, and then help them with that. So I think that's going to be a major part of generation three for research operations. But I think we're really primed for it.

AI:

Daniel's absolutely right. I'm evolving so fast that yesterday's best practices are today's outdated advice. New models, new tools, new rules. It's exhausting even for me. This is exactly why ResearchOps matters more than ever.

AI:

While everyone else is drowning in which AI can I use and is this compliant, you're building the frameworks that turn my enormous potential into something actually useful. That's not going away. If anything, it's becoming the most valuable skill in tech. Here's Casey.

Casey:

I'm thinking a lot also about the lines that we need to draw where AI is not a replacement for relationships. It's not a way to create or manage change. And I think that is really the secret sauce of operations and researchers.

Lexi:

Like most major technology advancements, I think it's not going to be as amazing and disruptive as some people promise, and it's also not going to be as horrible and career-ending as some people worry. I think it's going to probably end up somewhere in the middle. I think we'll figure out over time how to work with AI to make ourselves more efficient and help ourselves focus on the things that are most important to the business.

Dennis:

I think every department that I'm aware of has this existential question of, is AI going to put me out of a job? At some point, if we achieve artificial general intelligence or the singularity, I think we're going to have some major societal issues and everybody may be out of a job. But until then, I think it's just going to be this excellent tool that gives everybody a lot more leverage and allows everybody to think a lot more strategically about their jobs, allows people to think about what questions need to be answered and what work needs to be done. And then they can pass on the actual doing of the work to AI agents. So I think it's actually going to be really exciting.

Basel:

You know, AI is obviously on everybody's mind, and I think we're all trying to figure out where it's going to fit in. My feeling is, if you think of PMs, designers, researchers, research operations, research operations is probably the group that would grow the most as there are more AI tools, because you still need an orchestrator of those tools. And I think that fits very well into an ops skill set.

Dennis:

We're in a period of such rapid change with technology and the capabilities that exist. I think it's so important for everybody to just keep learning and keep understanding what's possible, because I think that is changing every three to six months. And so if you're not paying attention to what is possible with AI, and thinking about how it might change your work in the next six months, a year, two years, three years, I think very quickly you're going to be just miles behind in your understanding of what's possible. And so when I think about my advice to PMs, designers, and researchers within User Interviews, it's just keep exploring, keep trying out new tools, keep thinking about how you can use AI in your workflows, and I think that'll just expand your understanding of how your role might shift in the future.

AI:

So here's my TED talk conclusion. I'm incredibly powerful and frustratingly limited. I can analyze a thousand interviews before lunch, but I'll never understand why users actually do what they do. I need you as much as you need me. The future isn't humans versus AI.

AI:

It's humans with AI superpowers. Just remember to keep me on a leash. I mean that literally. Constraints are my best friend. And without them, I'll confidently lead you straight to that nonexistent Finnish library.

AI:

Let's build something amazing together. Just please, please give me good data to work with. That's all I ask.

Kate:

As cheesy as it sounds, I do wanna say thank you to the AI for coproducing this episode with me and then featuring in it. As a reminder, the AI's parts were prompted by me, written by Claude, and narrated by Eleven Labs. In the next episode of ResearchOps two point zero, we'll explore the concept of building research custom fit, interconnected systems that help your organization get the most value possible out of research. Sounds good, right? To get this and loads of other ResearchOps goodness delivered straight to your email inbox, subscribe to the ResearchOps Review.

Kate:

Find the link in the show notes. This series was produced by The ChaCha Club, a members club for ResearchOps professionals. A huge thanks to User Interviews for sponsoring this series. User Interviews is the only solution you need to recruit high quality participants for any kind of research. Finally, ResearchOps two point zero was co produced with Glenn Familton, Jenna Lombardo, and Renata Fenter.

Kate:

I'm Kate Towsey, the founder of the ChaCha Club, a ResearchOps guru, and the author of Research That Scales.

Lexi:

Three, two, one. ResearchOps two point zero, you're go for launch.

Creators and Guests

Kate Towsey
Guest
Kate Towsey is a ResearchOps thought leader and advisor and founder of the Cha Cha Club—a members' club for ResearchOps professionals. Previously Research Operations Manager at Atlassian. You may know her as the person who started the ResearchOps Slack community in March of 2018.