#132 - Unveiling UX Insights with Competitive Research with Rachel Miles, Research Lead at IBM

Erin - 00:00:31: Hello everybody and welcome back to Awkward Silences. Today we are here with Rachel Miles, who's a UX research lead at IBM. Today we're going to talk about competitive research, which should be a fun topic. I was just at my daughter's field day yesterday and disappointed they didn't have a winner. We need more competition in life, so I'm excited to talk about competitive research. Thanks for joining us.

Rachel - 00:01:01: Thanks for having me.

Erin - 00:01:02: I've got JH here too.

JH - 00:01:04: Yeah, I was talking to somebody on the product side about drawing inspiration from competitors recently and it's always kind of a hot topic. It's like, don't chase your competitors, but you do need to know what they're doing. And so I think hearing about it from the research side sounds very interesting to me.

Erin - 00:01:15: Yeah, awesome. And Rachel, I know when we were talking, you said this is not something you've spent your entire career focused on. So maybe we could jump into what kind of brought you to learn about and want to focus on competitive research in the first place.

Rachel - 00:01:29: Yeah, so I used to work in the Hybrid Data Management space at IBM, really focused on data collection and databases and that whole area. And towards the end of my time in that role, the product manager I worked with wanted me to look at how our competitors handled the desktop installation process. And so I looked at three or four competitors and saw just how much improvement we really needed on just that one experience. And it was really interesting to me and I really enjoyed doing that. Then I did a stint in research ops, and then I decided I wanted to be a full-time researcher again. This was something I talked with my vice president about being interested in. We have a vice president of user research, which we call customer insights, and he's very interested in having a way for us to more formally measure how well our products are doing. We do have internal design and UX reviews, which are kind of like expert reviews where internal people grade all of our products, and there's a quality check every time we have a new product release. But those are internal measures, and we wanted to match them up with external measures. And so we created this function, and it's been exciting so far.

JH - 00:02:53: Nice. When you think of competitive research, like what does that actually mean? I can imagine it's like going through a competitor's product and taking lots of screenshots and notes about the experience. It could be talking to users of a competitor's product and seeing what they like or dislike about it. It could be pricing stuff. Like do you do all of that? Is there a certain flavor of it that you focus on?

Rachel - 00:03:12: I think that's an interesting question because there's a lot of different terminology that gets thrown around with competitor research, and a lot of different ways you can do it. I've heard people talk about competitor profiles, competitive analysis, benchmarking, evaluation. There's definitely more than that, but I've created sort of a distinction between them that people may or may not agree with. In my eyes, there's a competitor profile, which is a high-level snapshot of the competitor landscape: you're just looking at their website and their marketing collateral, kind of their online presence. Then a competitive analysis goes a step deeper than a profile, where you look at secondary research as well, maybe analyst reports, and you do more of a breakdown of the competition. You have those charts you see a lot with the feature breakdowns of what they have so you can compare them. And then what I do is specifically competitive benchmarking: we pick two products, an IBM product and a key competitor, and with predetermined metrics we compare the success of those products over time. Another type of competitive research could be a competitive evaluation, which is similar to a competitive benchmark but might be a little more flexible in how you conduct it. These competitive benchmarks and evaluations are, at least in my world, very specifically focused on having users go through the products so you can compare how they do with the competitor product versus our product. And if you can't have a customer go through the competitor product, then you can do a competitive analysis. The last two are more UX-focused, with that in-depth look at the product experience, whereas a competitor profile and analysis are more high-level looks at what they say externally.

Erin - 00:05:17: Yeah, awesome. You can imagine that there's value in having all of those things together, right? On the profile side, how are they talking about themselves? And you might not even want to put a competitor on your list to look at if they're positioning themselves, you know, for example, in a category you don't want to play in. Like sure, that's a database. Which leads me to my next question, which is, how do you choose which competitors to benchmark against? Because I imagine all of this could be a significant amount of work and you want to be mindful about how many and which type you're choosing to benchmark against. So how do you make those decisions?

Rachel - 00:05:50: I mean, I could give you the ideal answer and then the real answer. The ideal answer is that you'd have some clear measures to identify which competitor to go against. But usually, if you're working in a company, you're playing a little bit of a political game. First, especially in UX and design, we still find ourselves advocating for the value of the work that we do a lot of the time. There's some people completely bought in, but then there's some people who've never worked with UX and researchers before, so they just don't know. And so there's that balance of, okay, what are the ones that the product managers are always talking about? And sometimes you're like, is that even the right one? Why are they so fixated on that one? Sometimes you have to just actually do the research on that one, and then maybe add another one as well. It's kind of a case-by-case basis. But usually, when I'm coming into a product, there's a clear one that everybody is really focused on, and then there's nice-to-haves, other ones that would be good. There's usually one we call the gorilla, the one with the most market share and the one that we're really watching. Sometimes there are niche up-and-comers that might be worth evaluating. But as you said, these are a lot of work, a lot of time and a lot of investment. So there's a balance.

JH - 00:07:18: And do you get some of that too, some of that sense from your own user base? When you're just talking to your own users, you hear them mention, oh, there's something interesting with this tool or whatever. Does that somehow maybe inform what's on the radar and things like that as well?

Rachel - 00:07:30: Yeah, that's definitely happened in research studies. Especially when I'm using a tool like User Interviews to recruit participants and they're not necessarily users of our product, I find they might just casually mention, okay, this is how AWS does it, or something like that. And that's useful. We can jump ahead, but one of the things we end up doing, in case we can't directly access a competitor product, is strategically recruit users of competitor products and then really drill into their experience of using those products and ask them questions about that.

Erin - 00:08:13: Yeah, and that could be really challenging with big expensive enterprise products.

Rachel - 00:08:17: Yeah.

JH - 00:08:18: Yeah, yeah, usually not like a free trial. You're signing up for it. You're going through and playing. All these different levels you kind of mentioned, from the profiles to the benchmark and everything else. How do you think about dividing your time across those or allocating resources and how often are you doing them? Are some annually, some you do every quarter? What's the lay of the land there?

Rachel - 00:08:40: My current role is really about refining a certain method that other researchers can then start using. So what I've been doing personally has been kind of jumping around to different products within our portfolio of IBM Software, and I have had to make myself very laser-focused on just the benchmarking aspect of it. I think when you're a product researcher, you want to do all of those levels at some granularity. But given the amount of time I have, and since we already have a lot of internal resources on the competitor profile and analysis side, usually our sales and marketing team and even the product management team do that type of research already, what I usually do is look at what they have, see the hypotheses they've collected, and see if there's a gap I need to address. But usually I don't do that work myself, because it's kind of done and I have to focus on what I'm charged with doing.

Erin - 00:09:39: So let's dig into benchmarking a bit more, because you're spending a lot of time there. And I imagine this is highly relevant and maybe new information for a lot of folks listening. So you mentioned that you're going to work with users to test the various products, and you're going to gauge them against some predetermined metrics. How do you choose those metrics?

Rachel - 00:09:59: Yes. So part of it is determining the goals of even running this method. It's kind of interesting what we're doing, because we're coming to product teams with this method, trying to create sort of a standard method for people to use. So we have a methodology-level set of goals, and then we have product-specific goals that we add to the actual research plan. In the first one we did, I felt like I added too many metrics, and it got kind of distracting. There's always that mixture of qualitative and quantitative and getting the right amount of insights, and the fact that I didn't have a super large sample size meant the quantitative metrics weren't as valuable as some of the qualitative ones. There's always a fine line, and I felt like I missed on the qualitative side a little bit; I could have gone deeper. When determining the metrics, we were trying to figure out what aligns best with the business value. A lot of the emphasis right now is on product-led growth, and with product-led growth, it's all about ease of use. So what are ease-of-use metrics? Those are classic UX metrics. We really like to look at time on task, and then task completion: how successfully are they able to do it? Do they need help? If they need help, that's kind of a sign of failure. Are they able to complete it with help, without help? Those are some of the key ones. And then for people who use competitor products, we ask them to compare their experience using the competitor product with using our product. Those are our core metrics, and if we need to add something, we will. But we're trying not to do too many, and we're still refining it.
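To make those core metrics concrete, here is a minimal sketch of how time on task and task completion (with and without help) could be tabulated per product. The session records and field names are hypothetical illustrations, not IBM's actual instrumentation.

```python
# Minimal sketch: tabulating the core benchmark metrics described above.
# The session records below are invented for illustration.
from statistics import median

# (product, task, seconds, completed, needed_help)
sessions = [
    ("ours",       "create_table", 210, True,  False),
    ("ours",       "create_table", 340, True,  True),
    ("competitor", "create_table", 150, True,  False),
    ("competitor", "create_table", 420, False, False),
]

def summarize(product: str, task: str) -> dict:
    rows = [s for s in sessions if s[0] == product and s[1] == task]
    times = [s[2] for s in rows if s[3]]  # time on task, successful runs only
    return {
        "n": len(rows),
        # completing without help is the strictest success measure
        "completed_unassisted": sum(1 for s in rows if s[3] and not s[4]) / len(rows),
        "completed_any": sum(1 for s in rows if s[3]) / len(rows),
        # median is more robust than the mean at small benchmark sample sizes
        "median_time_s": median(times) if times else None,
    }

for product in ("ours", "competitor"):
    print(product, summarize(product, "create_table"))
```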

Erin - 00:11:43: Is that hard to quantify? Like, tell me about using their product versus our product. Is it like, tell me on a scale of one to ten how good or bad it is? Do you try to quantify that?

Rachel - 00:11:54: Yeah, it's a rating scale, from much worse to much better, and then they'll usually explain why they gave that score. It's always tricky too, because there's a lot of psychology behind why people give an answer, but we do the best we can.

JH - 00:12:08: And is this happening on a recurring basis or is it more like an as needed basis? Are you trying to do a benchmark every three or six months just to keep it up to date or is it more of like we're doing something in this area and so this benchmark would be really timely so let's do it now type of thing.

Rachel - 00:12:22: Yeah, well, picking which products to start with for these studies is kind of the timely aspect, but then the intention is to do them over time, at a set cadence. Maybe it's every product release, or when they've been able to make the changes, then let's run this again and see if they were actually effective. It is meant to be done over time.

Erin - 00:12:40: The nice thing too, I guess, if you're doing that benchmarking with each product release, is you're benchmarking both against the competitor and against yourself, right? Because a lot of companies I know will do UX benchmarking, at least for themselves: how are we doing over time? So how are we doing over time relative to ourselves, but also relative to the competition?

Rachel - 00:13:00: Yeah, because they're changing too.

Erin - 00:13:02: Right. They might be getting worse and you're flat. Well, that's better than they're getting better and you're getting worse, right? So it's all these relative possibilities.

JH - 00:13:10: Yeah, true. You know, something we hear a lot in recruitment when we talk to people about research is generally a bias towards not reusing the same participants a ton and trying to find fresh people to get new insights. Is that different at all within competitive benchmarking where it's like, maybe it's hard to find somebody who's a user of some niche thing or whatever, and you're more willing to go back to them? Or how do you think about the participant side of it?

Rachel - 00:13:29: I haven't run into reusing participants yet personally, just because I'm jumping across products. I imagine that if I were still on a specific product doing these benchmarks over time, that might be a consideration, because it has been a challenge trying to find people using the tools that you want. You have to ask the right screener questions and everything, but this is definitely a use case that suits a third-party recruitment platform like User Interviews really well. I'm really relying on User Interviews for these types of studies rather than our own user base, because we're trying to recruit people who aren't IBM customers just for this type of study.

Erin - 00:14:09: So you're recruiting people who are not IBM customers. So are you asking them then, you're looking to understand their experience with competitors, but not relative to their experience with IBM?

Rachel - 00:14:19: It's the first time use, like we're trying to get. So, I was also sort of loosely referring to two methods of doing these competitive benchmarks because we have our ideal method, which is the head-to-head evaluation where we have people go through one product and then the IBM product. But then sometimes due to, like you said, like cost or legal restrictions or something like that, we have to do the alternative method, which is where we recruit people who use competitor products and we have to be really specific about which competitor products they use. With the first method, we don't have to be as specific about the competitors that they use. In fact, it's probably better if they haven't used either tool. So that's the first time I've used both. But for the alternative method, then we're like, okay, we want three people who use this product and three people who use this product and three people who use this product. And then that's when we're getting really focused on that. And it's like our backup plan.

JH - 00:15:17: All right, a quick Awkward interruption here. It's fun to talk about user research, but you know what's really fun? Is doing user research. And we want to help you with that.

Erin - 00:15:25: We want to help you so much that we have created a special place. It's called userinterviews.com/awkwardsurvey for you to get your first three participants free.

JH - 00:15:37: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy, it's going to be quick, you're going to love it. So get over there and check it out.

Erin - 00:15:45: And then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.

JH - 00:15:53: How do you think about avoiding some of the downsides that people always bring up when you fixate on competitors? When we think about it within a product, the terminology I've often heard is: you want to know where you have liabilities, like where your solution is just much worse than a competitive offering, and that might be something you want to address. But then you also want to focus on how you create unique value relative to competitors and kind of win on your own merits and your differentiators. And sometimes fixating so much on a competitor can hurt your ability to think that way. Is there any stuff that you do on the research side to make sure that you're getting all of the value with fewer of the downsides or risks?

Rachel - 00:16:27: I saw that in the discussion guide and I was curious, like, what are the downsides of focusing on the competition too much? I guess if you're over-fixated on it and defining your roadmap based off of that, then that's difficult. And that's where researchers really partner with product managers to define that roadmap. Maybe what I've seen is a little too much of not focusing on the competition, and that's why my role was created. So I don't know, I think it depends on the role. Within UX and design, we maybe haven't focused on it as much. And then there's always that executive level where they hear something about the competitor and they're like, oh, we have to do this, but they haven't looked at how the competitor is actually doing it. So yeah, this is just one method among many. We typically do a lot of other studies; this isn't one we do all the time.

JH - 00:17:23: Yeah, no, what you're getting at actually makes sense in the sense of, right, like, this is a way to gather the data and have more information and have more signal. That in itself is not going to be a bad thing or detrimental. It's going to help you understand stuff, how you make use of it or the decisions you make off of it. Maybe that's more on the product side where it's like, oh, the competitors are all doing X and Y. So we should do X and Y when really you should be doing Z to like set yourself apart. So maybe how it's distilled or interpreted is the more impactful part, but the gathering and having the data is usually not a bad thing.

Rachel - 00:17:48: And that's where the user researchers are really valuable because we're always asking why. And so we push back on them and then we'll run and do a study to see who's right.

JH - 00:17:58: Yeah.

Erin - 00:17:59: Yeah, it does feel like a question of degree. It's like not should you do competitive research, but maybe like how much, right? Everyone has finite resources. So how much of your time does it make sense to spend on competitive versus other forms of research? And like when in the life cycle of a company should you be focusing on competitive research? Is that something you should be focusing on from day one or as you're more mature? I don't know if you have a perspective on that, Rachel.

Rachel - 00:18:24: Yeah, I do. I think there's different timings for competitive research, and it depends on the type of competitive research you want to do as well. The type that I'm doing right now is so evaluative that it's definitely something you want to do once you actually have a product launched. I actually ran into that, where I was asked to do a competitive benchmark on a product we hadn't even released yet, and I'm like, well, I don't even have anything to test. It goes back to generative versus evaluative: there's generative competitive research, and there's evaluative competitive research. Generative competitive research could be recruiting users who use these competitive products and just having a structured interview with them about what they're finding valuable. And I've done that before too, looking at the analyst reports, doing the secondary research, and finding strengths and weaknesses of other products. You need to tailor your methods to the timing of your product.

Erin - 00:19:25: Yeah, it goes back to what you were asking, JH, like what are the downsides of this? I feel like the pendulum in recent years swung very much toward competitive research, competitive anything, not being cool, and you should really just have your own point of view and focus on your positioning and your roadmap and so on and so forth. Maybe there's just a natural way every pendulum swings and that's just how life is. But with the current state of the market, it does feel like being market-aware is a good thing, whether that be your competitors, the changing needs of your customers, or just generally what's happening at the macro level. There is so much change and disruption happening that, and I don't know how much this fits in with benchmarking, but your competitors are not only your direct competitors; it's also: do we even need this tool at all? Do we have the budget for this tool? So just being aware of what the alternatives are does somehow feel more urgent now than it would have in the past.

JH - 00:20:27: I think a lot of teams face, right, is with user research resources often kind of being a little constrained and, you know, how do you best deploy them? And this kind of has led to the whole democratization of research and PMs and designers leading research in cases, things like that. When you think of competitive research, is that something that's like... methodology that's pretty well suited for like a non-researcher to help take. So like a, you know, product manager or designer could do some of the competitive benchmarking or analysis and, and free up the user researcher to do maybe more generative or some more specialized methodologies. Or do you think it's actually a case where the skillset of knowing how to do competitive benchmarking really well is, you know, ideally everything is led by research, right? But if you're in a team that has some trade-offs and resource constraints, any thoughts on the ability for a non-researcher to do this type of stuff?

Rachel - 00:21:11: With the current project of developing this method of competitive benchmarking, it's kind of meant to be a method that people can pick up and use. I haven't done this yet, but I could see in the future maybe giving it to people who do research, especially since there's already sort of a structure to it; it's very structured. When I've helped designers I've worked with do research, that's the type of research I help them facilitate: the structured, evaluative kind, like prototype testing or usability testing, concept testing. That's usually what I would lean on them to do. So I could see at least this form of competitive research that I'm developing, the benchmark, definitely being done by somebody else if you train them.

Erin - 00:22:03: For these benchmarks, paint me a picture, you have like a deck, you have a grid with scores and competitors. Like what are some of the assets you're creating to show these benchmarks? And then how do you share them? And how do you update them and reshare them? Like how are people accessing this information?

Rachel - 00:22:22: Yeah, I'm a super big Airtable fan and I do everything in Airtable. I've written about it before, like a medium and everything. And I collect all of my data in Airtable. Like I do that atomic research, like having it all in like observations or nuggets, and then they go up into insights. And that has been very valuable because it helps the teams that I work with dig into the data, actually, instead of just being able to see the presentation I make at the deck I make. And I mean, I always do make a deck because I feel like I can tell a story better that way. But then really diving into the data, I use Airtable interfaces and let them kind of play around with the data. So I've made dashboards of like, oh, here are the participants and here are the breakdowns of them. Like here are their roles and these are the tools that they use to help sort of prove that I recruited the right people. And then I have an overall like Metrics dashboard, which is like the metrics from the overall study like that you ask at the end with the overall like ease of use questions. Like I use the UX, U-M Light, which is like a shorter version of the System Usability Scale. And so then there are those like overall metrics, the NPS score, because we love NPS at my company. Whether or not you agree with it being a good UX.

Erin - 00:23:39: That's another pendulum. That one's always going back and forth.

Rachel - 00:23:44: Because it's something that my stakeholders understand. It's good to have something for them to understand. And then I have another dashboard for task-specific metrics, like the time on task and the individual ratings. It goes very in-depth, and that's been very, very helpful for people to go in and then be able to take action on the findings.

JH - 00:24:06: I love that. Yeah, I think there's something to the idea that if you're doing research, it's going to unearth a lot of this somewhat structured data. Compiling it in a format that lets people actually interact with it and explore it seems much more useful and rich than guessing which cuts and trends people want to see and pulling those out into a slide deck. Giving the end user a way to play with it seems pretty cool. Have you gotten any feedback? Are the people you're sharing this with fans of it?

Rachel - 00:24:33: Oh yeah, I was a little worried because I'm like, oh, this is too technical or something. But the product manager I worked with was like, I go back to this every day and I share it with my team. And that's like music to my ears.

JH - 00:24:45: That's really cool.

Erin - 00:24:46: Yeah, and just I was asking about the change of the benchmarks. Maybe this is hard to podcast them, like looking for a visual year. But how do you represent the changes? Do you have different columns for, I guess you're early in this movie. There hasn't been as much of the updating of the benchmarks.

Rachel - 00:25:00: Yeah. So, I mean, what I do is like you can map out time series. So you could make a bar chart or something. I don't know. And then like you can see if this is like the time on task the first time, then this is the changed time on task or hopefully it's lower. I just.

Erin - 00:25:13: Right. Right. Right. OK, so they're able to access that information quickly, too. It's always very satisfying just seeing the change in things, because you start from you're like, is this good? Is this bad? I don't know. Like, but seeing the change over time is very satisfying.

JH - 00:25:27: How do you think about the number of participants you want for this type of stuff? I know that's always a kind of a hot topic in research is, you know, is five people enough and yada yada. But like when you're doing kind of like those trends and time on task and stuff, do you need a bigger sample or how do you make sure that you kind trust some of these data points?

Rachel - 00:25:43: Yeah, when I do the alternative method, which is where I recruit people who use competitor products, I have to be able to make sure I have enough people who use the competitor products. And so I've been targeting three because it's not one or two, but it's three, it's like a larger number and then maybe having nine or 12 participants total. Yeah, I'm still early in this and still sort of refining it. And so I'm trying to I'm thinking to have five and five for the ones where it's like we're the head to head one where it's okay, you're going through our product and the competitor product and we'd have them go through both in the same one and counterbalance which one they start with. Like they'd start with an IBM product and then the other person would start with the competitor product. That might end up being too much. But that's, I think, probably around what we start with. We've been having some legal challenges with doing this, but the next project has been approved to do it this way. So it'll be exciting. Yeah, cool.

JH - 00:26:38: Is this easier when you're dealing with the end users of the software? So I'm thinking of just an example, like HR software tools, like we use a cool tool called Culture Amp for performance reviews and other stuff. Probably somewhat easy to go out and find people like a manager who's had experience writing a review and that and they could do some tasks with you and so forth. Probably much harder, but in some ways maybe more important to find the buyer of that software, like the head of HR who's going to implement it and set it up and they use it super differently than like the rest of the organization. Do you find that most of this stuff is more towards the end user of how they're using a B2B solution or do you also do stuff that's more with the buyer or the champion who's like doing it in a very different way?

Rachel - 00:27:17: The model that's really popular now and that we're going after is the product-led growth approach, which is different than the way that we've always done it, which is the buyer, which is really heavily reliant on the sales team. My focus is going forward. We're all really focused on this product-led growth now, and so it's definitely more of a focus on the end user.

JH - 00:27:37: Yeah, and it seems like that makes a lot of these methodologies and insights probably much easier to gather because it's actually like the usability of the software, not the, you know, one-time implementation thing that you had to go through or something, right?

Erin - 00:27:48: You mentioned a couple of times just like legal challenges. I'm curious how, you know, what you can talk about what legal approves in terms of how you've navigated those or any advice you might have for folks to kind of do this kind of research in a way that is legally sound, which we always want to be. We like to say at User Interviews, we support any research that is safe and legal. So yeah, let's keep it legal.

Rachel - 00:28:10: Yes, yes. I've learned a lot about our corporate legal guidance on how to do a lot of discussions about this. And basically the two big takeaways I would share with anybody is that all products have master service agreements or master license agreements or even terms of use. And you also might want to watch out for if their trial has a separate terms of use, because that was something I ran into when I checked the legal agreement for the overall tool. But then I signed up for the trial and then found out the trial had something about not accessing it for competitor use. So you have to check these things. And so what I usually do is I do kind of like a Command+F or Control+F if you're on Windows, to find some keywords and the keywords I typically look for, I usually type in COMP, which are the first four letters of competitor. And then you might see a prohibition against competitive benchmarking or competitor access or something like that. I'll type in that. Sometimes I'll type in benchmarking too, because a lot of that's kind of the common term that people use, especially from developer access, they'll want to do performance benchmarking and that kind of thing. So there's a few kinds of keywords to look for and just check, okay, are they mentioning anything about it? If so, okay, go back to your legal team and assess risk. And then the other thing is be transparent about who you are. So sign up with your work email and not pretend that you're actually a prospective user. Just sign up with your work email and access it in a not sneaky way. Yeah.

Erin - 00:29:38: So that you can be recruited when they recognize you. You would think everyone would just include those competitive lines in their terms of service if... I don't know. Why not?

JH - 00:29:50: It makes me wonder. It just feels so different in a physical product versus software product world. If I wanted to go into spatula manufacturing and I went out and bought a bunch of spatulas to learn what I like to do. I'm 100% buying these to use them for competitive research. It seems preposterous to have that disclaimer on that, but I guess you're buying it. Maybe it's different if you're spending money on these tools. I'm sure you can use it every month.

Rachel - 00:30:10: Well, and then there's some things where, so some people have told me, well, okay, you can't access it, but maybe you recruit a participant and then they access it, but then you're actually putting them in potential legal sticky situations too, because they'll say, you know, you can't even view it via someone, like specifically hoarding, I forgot exactly what it is, but that's something to be careful too, because you don't want to put your participants in a bad situation.

JH - 00:30:34: You have the participants recruit another person, you get enough shells and layers here.

Erin - 00:30:37: Yeah, just get your kids to do it. That's clearly the solution here.

JH - 00:30:41: No, it's a good point though. It is, I think your point about just being genuine or not trying to sneak around and doing some basic checks, that does feel like it goes a long way to covering the bases and being ethical here.

Rachel - 00:30:52: Yeah, and you might find that a lot of companies in this whole product-led growth mindset have already put demos of their tools on YouTube and things like that. So there's indirect ways to access as well.

JH - 00:31:03: You can't view that YouTube video for competitive reasons.

Erin - 00:31:05: Yeah, they might give you the shiny version. Yeah, that's true. Tripping over task completion in the demo videos. Blooper reel. Cool, well zooming out just a little bit, other advice that you would give folks that are trying to do some competitive benchmarking that maybe we haven't covered yet.

Rachel - 00:31:22: Benchmarking is specifically like what we try to do is identify a core set of tasks that users would go through on both products. There's some reason why you chose this competitor and that specific product you're working on. And so theoretically, they're trying to accomplish the same goal. And so that's just something, okay, like you try to have them go through the same tasks on both products and then you're comparing how they perform on those tasks. That's kind of really the core of what we're doing. Right, right.

Erin - 00:31:48: And you work with, I guess, the PMs to figure out what those tasks are.

Rachel - 00:31:52: PM development, yeah, whoever's, everyone has opinions and yeah, developers are really helpful there too.

JH - 00:31:59: How do you prime or warm up participants to have enough context in like... whatever fake scenario you're going to ask them to go accomplish tasks with it. You know, imagine, I'd imagine for like some B2B software, it's rather involved, right? Like pretend you're in this role doing this thing and you need to do this and now go through all these steps. Like any tips on how you make that like as realistic or as useful as possible so it doesn't feel contrived or fake?

Rachel - 00:32:23: It really depends on what the product is, of course, but that can be really helpful. Of course, partnering with your cross-functional stakeholders, especially they may have been working on this a lot longer than you have. And I found that sometimes support documentation can be kind of helpful for that too. We had one product that has a guided tutorial and the documentation about how to do that. So I just copied what the documentation had. They had created this whole scenario about how you work at Golden Bank and you're trying to do this. And so I just was like, oh, that sounds great. I will use that scenario. That sounds good. Let's pretend we're at Golden Bank and we're trying to replicate some credit score tables. And so I think that worked pretty well.

JH - 00:33:04: It's also probably a tip there too. I know when I've tried to access competitor tools before and haven't had success, looking through like release notes and support documentation and stuff, you can find a lot of screenshots, you can find a lot of details that, again, aren't the same as using the product firsthand, but there's probably something there too as a way to poke around.

Rachel - 00:33:20: Yeah, maybe forums, maybe people are talking about their use cases and you can come up with some ideas from that. Maybe ChatGPT, I don't know. Never know.

JH - 00:33:30: Yeah, maybe.

Erin - 00:33:31: Yeah, I'm also thinking about, you could imagine some like targeted applications of this, for example, let's say like we're seeing churning contraction with a particular segment. And we know that this segment really, you know, they're signing up for a product for this particular use case, let's dig into that one, right? And let's watch how people are accomplishing those kinds of tasks. So you could be a little more targeted with it.

JH - 00:33:55: Like, do you want to paint like a hyper realistic, like immersive scenario, or do you want to keep it kind of like bare bones? Like, do you want to be like, it's Tuesday and overcast and like your boss is like, you know, and like really get them like in character, so to speak, or is it more of like, hey, like you're a manager, you need to like pull this report, go see if you can do it. Like, do you have a... One of those seemed better than the other?

Rachel - 00:34:16: I'm going to make a guess here. A little more, yeah, like focused on the task itself. I'm sort of like, I'm role playing.

JH - 00:34:27: I'm just checking, making sure.

Erin - 00:34:29: They have a dress code they have to bring to the testing section. It's a murder mystery.

JH - 00:34:35: All right, well, it sounds like keep it simple.

Erin - 00:34:38: Awesome. Anything else folks listening should know? I've given a lot of great advice already.

Rachel - 00:34:43: I guess one thing that I always think is super valuable when doing any type of research is really involving the rest of your team. And so having the developers, designers, and product managers attend your sessions is super valuable. And that's where certain user research tools can be helpful with that too, because you can create that sort of one-way mirrors thing. Like with Lookback or UserZoom, they have those, like what do they call it, like the waiting observation rooms. So then they can, you can have a separate chat with them and then you can have a chat with your participants if you need to send your participant a link or something, but they can ask you questions that you can then ask, but then it's not disruptive to the participant. There's ethical questions about that too, like should the participant know they have a bunch of people observing them or like how do you disclaim that? But I also don't think it's necessarily right to have a Zoom or WebEx call where you've got 20 IBMers on. And then they're like, I'm being watched. It's just kind of uncomfortable. Like I don't want to offend somebody. And so I usually try to position myself as a third party person, like helping evaluate a tool. Like I kind of try to distance myself from the product as much as I can. Like I didn't work on this. Like I'm just evaluating this tool and then sort of having like the team on there is really valuable, but not having them all visibly there for the participant. Good. And so I like having them be able to ask me questions to ask them because I've had it before where somebody went off me and started arguing with the participant. That's like the worst. But having them there is so good because then they've already anticipating what you're telling them, like you're going to present to them. And then they're already thinking about ways to improve the product. It's like that Tomer Sharon, their own book, like it's our research. Like it really becomes our research, you know, like it's a collective effort.

JH - 00:36:32: Yeah, yeah, totally. And I'd imagine too for the engineers and the product folks, like, you know, while you're working on some of the big takeaways, like time on tasks and some of these other structured things. There's probably lots of little details they pick up on, oh, that icon is much clearer for this type of thing or whatever, right? And you know, might be able to take some small things away from it that maybe, you know, wouldn't be on the researcher's radar. And so I imagine that's super helpful.

Rachel - 00:36:52: Yeah, I mean, no matter how long I work at IBM, there's gonna be people who've worked at IBM longer than me, especially. There's people who've been there 20 plus years and it's like, and you've been working in this area 20 plus years, there's just gonna be something, like you're so much more knowledgeable than me. And like my expertise is in UX research and then their expertise is in that specific area. So it's really helpful to have them there for those purposes as well.

Erin - 00:37:16: What are you excited to do next with competitive research?

Rachel - 00:37:19: I'm excited to do more of these and really be able to refine this method and share it out. And then, I mean, otherwise I've just been having a lot of fun learning, keeping up with all the AI foundation model stuff. It's crazy.

Erin - 00:37:32: Yes, it is.

JH - 00:37:33: Yes, it is a wild landscape.

Erin - 00:37:36: Awesome. Thanks for joining us. It's been a lot of fun.

JH - 00:37:38: It's been great. Yeah.

Erin - 00:37:40: Hey there, it's me, Erin.

JH - 00:37:42: And me, JH.

Erin - 00:37:43: We are the hosts of Awkward Silences, and today we would love to hear from you, our listeners.

JH - 00:37:48: So we're running a quick survey to find out what you like about the show, which episodes you like best, which subjects you'd like to hear more about, which stuff you're sick of, and more just about you, the fans that have kept us on the air for the past four years.

Erin - 00:37:59: Filling out the survey is just going to take you a couple of minutes. And despite what we say about surveys almost always sucking, this one's going to be fantastic. So userinterviews.com/awkwardsurvey. And thanks so much for doing that.

JH - 00:38:12: Thanks for listening.

Erin - 00:38:14: Thanks for listening to Awkward Silences, brought to you by User Interviews.

JH - 00:38:19: Theme music by Fragile Gang.


Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews

John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)

Rachel Miles
Guest
Rachel Miles, UX Research Lead at IBM, is a user experience researcher and strategist. A self-proclaimed nerd of all trades, she loves to learn about everything that crosses her path. In her spare time, you might catch her reading, drawing, traveling, or working on her blog where she talks about where technology meets wellness.