#108- Perfecting the UX of UX testing with Nicholas Aramouni of Userlytics
E109

[00:00:00] Nick: I'm not anti-unmoderated, because I feel like it serves its purpose, but I'm certainly pro-moderated in the sense of, if we're running a new, let's say, feature, like a buyer with analysis, whatever, the eye tracking, I want to be there experiencing what that participant is going through. And sometimes the participant, they are a researcher.
[00:00:18] Nick: What a beautiful thing. When you're testing your research with a researcher, that creates a beautiful environment for you to get the input that you need.
[00:00:29] Erin: This is Erin May.
[00:00:31] JH: I'm John-Henry Forster, and this is Awkward Silences.
[00:00:43] Erin: Hello everybody, and welcome back to Awkward Silences. Today we're here with Nicholas Aramouni, who is the senior UX researcher at Userlytics. Thank you for joining us.
[00:00:54] Nick: Thank you for having me. Looking forward to being here and chatting today for sure. Awesome.
[00:00:59] Erin: I got JH here too.
[00:01:00] JH: Yeah, I feel like on some past episodes we've talked about like the importance of testing the test and today we get to talk about the, the UX of UX testing.
[00:01:07] JH: So we have like another pithy saying in our quiver here. It's good. Yeah.
[00:01:11] Erin: Yeah. We're gonna be talking about user experience of user experience testing. So, you know, sort of amazing. We haven't talked about this before, I guess, because we talk about both of these things all the time, but never together.
[00:01:23] Erin: So I think this will be really fun and a new spin on both of these things. So again, thanks for joining.
[00:01:29] Nick: Yeah, looking forward to it. It's one of those things that's like, there's so many avenues you can explore with it too, because it is so meta in a sense, but a lot of fun when you kind of dive into it.
[00:01:38] Nick: So this should be a good one, I'm sure.
[00:01:41] Erin: Well, let's start at the beginning. So how do you build a great UX to test UX? How do you even think about that?
[00:01:52] Nick: I guess the funny thing with that question, like when I think about that too, is the idea of what good UX is doesn't change, right? And what I mean by that is the intention is to make something meaningful, relevant, effortless, regardless of the context.
[00:02:07] Nick: That's the whole idea: we want people to feel that meaningful, relevant, effortless. When we step into UX testing, though, the important part is it has to be flawless, and I say that specifically regardless if you're the researcher or somebody stepping in as a participant, the number one priority is the test, right?
[00:02:25] Nick: It's not everything outside of it that goes into programming or getting the invite. The priority is the test, which means everything has to maximize that. So how do you do that? Making it flawless, but of course, making it logical, relevant, predictable. Great. That's sort of general step one. That's sort of how I think about it, that flawlessness.
[00:02:45] Nick: The next part, I think the important part, is simplicity to its absolute core, and I mean simplicity and intentionality together. The reason I say that: we're in a time where, if you talk about democratized research, which is a hot topic, more and more people are getting into this industry, maybe as, and I'm gonna put this in quotations, non-professional practitioners, which means there's an influx of participants, XYZ.
[00:03:12] Nick: We know what that means. We have to make the users and the testers feel as if they can do this no matter what. Even if they're new, they need to know that they can enter into this with simplicity and just focus on what we just said, the test at hand. So intentionality there is important, right? The intentionality, not just of asking specific questions, which I think is the general approach people think: how do you write a good screener, or how do you make a good test?
[00:03:36] Nick: Making it intentional, great, but it's also being intentional in the people that are there. Relevant people are able to create meaningful experiences, relevant experiences, effortless experiences to get that insight, right? Yeah.
[00:03:50] JH: Yeah. Just to jump in, what's an example of like ways that people can make this like unnecessarily complex?
[00:03:54] JH: Cause I feel like maybe it's a little hard to get like an example of something that's really elegant and does this in a perfect way, but it feels easier to point to: here's an example of really overworking it and making it much harder to parse than it needs to be.
[00:04:06] Nick: Yeah. And I think, if we're speaking from a researcher usage perspective and how they approach it, it's adding in too much that's not relevant. And when I say not relevant, I don't mean you don't need to know it, but it's not directly tied to your research goal, your objective. And we can talk about that a bit later too, but you don't want to be asking questions, outside of course building rapport in a moderated session, that are not tied directly to a research objective. You don't want to ask questions for the sake of asking questions, or if we're talking platform, you don't wanna make the participant jump through hoops to get into the test. Remember, the test is the asset.
[00:04:44] Nick: The before and after have already been worked, hopefully, well, fingers crossed. But again, that test is the asset. So making sure that that insight, garnering that insight, becomes the priority, is how I would respond.
[00:04:57] Erin: So I guess there's a lot of vectors to this, right? So there's the UX for who? The UX for the researcher, the UX for the participant, and then you could probably go deeper: the UX for, you know, other people joining the research session on the researcher side, for the consumers of the insights.
[00:05:13] Erin: And it goes on and on, but really, You know, you've got your researcher and you've got your participant, and then you've got the UX of, you know, whatever tools that you're using, and it might be more than one tool that you have to think about as well as the design of the test and how that design sort of fits together, you know, with whatever platforms or or tools you're using.
[00:05:32] Erin: You mentioned the number one thing is it just has to be flawless. It just has to be perfect, so no big deal. Talk a little bit more about that. What do you mean by that?
[00:05:42] Nick: Yeah, and I'm glad you asked that. The way you do it is essentially becoming your own client. Testing your test, testing your platform. And I say this because my favorite saying is: test everything.
[00:05:53] Nick: Test early, test often. And really, as an extension of that, what you're able to do is iterate and change to what people are demanding of your product and service. The things that we need as UX researchers in a remote sense have changed significantly over the last three years, even over the last year and a half; no matter what capabilities, those things are in need of iteration.
[00:06:13] Nick: But the most important way you do that is by mimicking what people experience in everyday life, right? And we can dig into that when we talk about, you know, perhaps the research side or the participant side, but we don't want people to feel like they're doing something out of the ordinary, whether you're a new researcher, new tester, or experienced, on both ends.
[00:06:33] JH: It sounds like there's a piece here where, like, a big part of the UX for the researchers is that they need to be really mindful of the UX for participants. So in the way that they're setting things up for themselves, they really have to go through it as what the participant's gonna experience, and make sure that they've covered that, and that it flows cleanly, and the handoff and the way they're explaining things and stuff. Is that a fair way to think about it?
[00:06:53] Nick: A hundred percent. As a researcher, one of the primary tools that you have is empathy and putting yourself in someone else's shoes. And so when you're designing, especially again in a remote sense too, when you're designing your test or designing what you need to be doing to get the insights you want, you need to have your participant in mind.
[00:07:11] Nick: And again, we mentioned that the test is the priority. Making sure that that becomes what they're focusing on, and just getting the answers that you need. Not more,
[00:07:20] JH: not less. Yeah. I like that. I think there is a tendency, I think we see this in a lot of things. You see it in product development too, but just like, you know, the kinda like while we have the hood up, like let's do all this other stuff and it's like, while we're talking to this person, let's jam in all these other questions or ancillary things.
[00:07:33] JH: So I think the idea of, yeah, being really strict on that and keeping it focused and core to what's important to learn is a really good piece of advice. On the participant side, it feels like that experience is a little easier for me to wrap my head around. They need clear prompts and communications about what's next and what to do, and, right, expectation setting.
[00:07:52] JH: You need to make sure that whatever tool you're putting them through, whether it's, you know, a video call for moderated or some unmoderated testing tool, is gonna provide them the right guidance to be able to successfully do the thing that you need them to do. Is that kind of like the main ingredients? Or when you think about the participant side, are there other things that you, you know, really wanna pay attention to and get right?
[00:08:11] Nick: Yeah. Well, I think that's an overall general theme of the idea, right? It's taking into account that participants might be doing this for the first time, and this could be their first touchpoint ever being in a test. And even if it isn't, I promise you your participants are nervous. Like, I can almost guarantee that they're like, am I qualified to be here?
[00:08:32] Nick: What am I about to do? Do I have to share private information? Especially when you talk about testing internationally, you know, what's allowed, what isn't. These things all play a factor, and to sort of, I think, encapsulate that, I always relate it to making their participation, from the first touchpoint, the best possible user experience they could have.
[00:08:52] Nick: And you do that by translating it into what I call making an online reservation. Let me explain what I mean by that. When you go to book an online reservation, you click a link, you end up at a page, it asks you a few simple, direct, intentional questions, right? What's your name, for how many people, which location?
[00:09:10] Nick: Great, step one is done. Step two is, what time would you like to join the session? These are the times we have available. Great, next step: what's your email, contact information? Email confirmation sent. That's what your screener should do. It should ask those intentional questions. Keep it simple, direct. How do they get in touch with you?
[00:09:29] Nick: And when that confirmation email is sent in this reservation, it gives them the information they need, that well-informed part that you just mentioned, right? How do you join the session? How can you be prepared? Is your audio working? Do you need headphones to do XYZ? Here's how you cancel. Here's who you contact. Done.
[00:09:45] Nick: And of course, the prompts and alerts are there as well.
[00:09:48] Erin: When you think about the participant experience, who does the onus of creating a good one fall on? Is it the researcher, or is it the tool, or is it both?
[00:10:01] Nick: Both. It's the second one. It's both. If you're using a tool that by and large should be done to make life easier, really, hopefully you're using it because you're trying to make life easier.
[00:10:14] Nick: So the onus falls on the tool, and this is where we talk about things like what's a research tool and what isn't, right? Is Zoom a research tool by extension? Sure. But is it a UX testing platform? Maybe not. The onus falls, of course, on the design of the researcher to write intentional questions and be specific, but the tool, and the gold standard researchers expect from these tools, because they are the integrity of the industry, needs to come first and foremost. Again, that idea of the test is the asset.
[00:10:42] Nick: Let's not have participants need to log on and share their screen and remote click and type in links. No. The platform should provide the prompts, should provide where they need to go, and give the instruction and keep it at that. It should be click, drag, you know, click, drop, copy, paste, XYZ. Straightforward.
[00:11:06] Erin: Yeah. And, you know, Userlytics is of course a testing platform, and when you mention the Zoom example, it seems like a more purpose-built testing platform is hopefully going to take you farther in making for an easy participant experience than something that's not purpose-built.
[00:11:25] Erin: How do you think about the right balance of, you know, sort of building a really opinionated UX for participants and researchers, that maybe makes it easier to make that a good experience, but maybe takes some of the control out of the researcher's hands by being less flexible? How do you think about getting that balance right?
[00:11:42] Erin: So that it's more sort of turnkey and easy to use, but maybe not as customizable for the needs of the researcher.
[00:11:50] Nick: So if I was to give an example even of how you make something turnkey, but also flexible in a sense too, let's talk about how you would, as a researcher, set up a test.
[00:12:04] Nick: We know, for researchers setting up a test, Erin, that this is an additional step in their process. They probably don't want to be doing this. They want to be getting to the real purpose, which is that insight. And I know I've beaten that horse, you know, a lot, but that's really what it is. How do you simplify the journey?
[00:12:21] Nick: Well, if a researcher comes onto a platform, don't throw the kitchen sink at them. Please don't. Keep it step by step. So you land on a page: what's the name of your study? What kind of study do you wanna run? How many people? Step one's done. What's the next important part? Participants. So instead of making a researcher go in and, you know, copy link, send link, XYZ, contact participants...
[00:12:43] Nick: If it's an all-in-one tool, you should lay out, let's say, all the key elements that you'd want to know about a persona you're targeting: age, gender, region, education, whatever those are. We all know what they are, but make that accessible. Don't make them search it up and click it up. On, off, on, off. This is what I wanna search for.
[00:13:01] Nick: Boom, participants are done. Now let's go into a situation where you're creating your test. Plug and play. You know, have the standard questions people ask, perhaps on the left-hand side, drag-and-drop rating questions already set up. You don't have to fill in all the information. Keep it intuitive, rolling, simple, and of course, customization's there.
[00:13:21] Nick: If you wanna change the words, you can, but don't force the work to be creating the test. Give them the option to change that if they want, but be predictive. What's relevant to your UX researcher? Know what those questions are, in this industry-standard way, and plug and play from there.
[00:13:37] JH: What about when you think about the experience, how do you factor in like the unexpected issues or glitches that kind of come up inevitably?
[00:13:43] JH: Right. So obviously the goal here is make it flawless, as you said. But you know, it is two humans trying to connect with some software in the middle. Things go sideways. Someone's kid is sick, someone needs a reschedule, whatever. Um, and I think a lot of the times, you know, people think of UX as like the screen or whatever, but it's much more holistic than that.
[00:13:58] JH: And how you help a participant or researcher recover from an issue or, you know, something going off the rails a little bit is probably also very meaningful to how it's perceived. And, as you called out, these people are nervous; they wanna do a good job. Is that something you try to plan for up front, or is it more of, like, just be really empathetic and we'll help people as issues come up?
[00:14:17] Nick: Yeah, you know, when I used the word flawless originally, JH, I knew that was gonna cause some murmurs. Like, okay, what the heck does that mean? How do you do that? It's not possible. And it isn't, but what a great target to have, right? We can't control what tech's always going to do. There's bugs, there's glitches, absolutely.
[00:14:34] Nick: But what's a simple way that we can create reliability and stability, for instance? And I think we experience it in some platforms we use now. In a sense, the IT endeavors are like unsung heroes of what it really means to keep things safe and almost flawless. An example of that is, you know, when you're going to log onto a test, let's say as a participant: if you've already done the right thing to inform them, tell them what they need to do, how they need to prepare, how they can reach out.
[00:15:00] Nick: You know, on that confirmation reservation email: if you need to change, just click here. Done. Keep them informed and let them know it's not the end of the world, just let us know, we're here to help. That's one way you do it. Another way, when you're actually getting into the test, that I see all the time now with the platforms I use, is that pre-connection test,
[00:15:20] Nick: which creates sort of this reliability. You know, is your connection good? It runs that little diagnostic. Is your connection secure? Let's not forget safety here, data safety. But, you know, is your audio working? Is it the right camera? Whatever these contexts are, giving them that checklist, saying, hey, we've done this checklist for you.
[00:15:40] Nick: All you have to do now is step in. And same for the researcher: hey, we've done the check for you. Your test is already ready to be launched. All you have to do is hit preview, submit, and focus. And again, I'm making it sound so simple, and it's not that simple, but that's really what it should be. Again, the test is the asset, number one.
[00:15:58] Nick: Everything else outside of it is making people feel like what they're doing is meaningful, relevant, and effortless.
[00:16:04] JH: Yeah, I like what you're getting at. It sounds like, you know, it's the simple-but-not-easy thing. There are some simple guiding principles or things to aim at that are easy to remember and, you know, maybe put on a checklist or something for yourself.
[00:16:16] JH: And then what you have to do in any given test is gonna be a little different, and depending on the tools that you're using, figure out how to actually strive towards those, right? So is that maybe a fair way of thinking about it: what you're aiming at is pretty simple; how you pull it off on any test, a little bit more involved and...
[00:16:31] Nick: Never easy. Yeah. And I guess the intention there too is to let you know to always kind of start small as well as you get started on using platforms, but the idea of simple, not easy, is definitely what it is. And I think we talk about even heuristic analysis, let's say, in the field of UX research. Everything heuristic analysis tells you to do,
[00:16:53] Nick: it's simple. It's these ten principles that you follow. Here are the ten things you should do. Well, now let's actually go do them. And that kind of alludes back to what I said before, JH: test everything, test early, test often. If you're on the back end creating a platform, you need to be able to test all the things that you're providing clients, providing people, and ensure that, again, it's iterative, it's trustworthy, it's purposeful in what it's accomplishing.
[00:17:20] Nick: Yeah.
[00:17:21] Erin: So when it comes to stability and reliability, there's, you know, some other things to talk about here too, right? So when you think of an all-in-one sort of platform, you're talking about obviously some sort of way to connect researchers and participants, different test modalities. There's the analytics component, the insight sharing.
[00:17:41] Erin: So there's all these different points where things could fall apart. So how do you think about, you know, creating reliability across all of these different components of the software?
[00:17:53] Nick: I think I almost alluded to that before, in the sense of, like, the IT endeavors are sort of the unsung heroes here. But I also say that because research is very much a collaborative approach.
[00:18:05] Nick: And where I'm going with this is, IT and dev, when developing a platform that's gonna give you that strong connection that we just talked about, or giving you that ability to pre-check tests and stuff like that, that's great. But what matters even more is the collaboration aspect between the researchers themselves using the platform and the teams developing these tools.
[00:18:25] Nick: Right, and I know I've also alluded to that in terms of developing a good product or developing a purposeful product, but that collaboration effort is going to make sure that, you know, if you're auto-generating metrics from rating questions that are programmed in, or if you're automatically storing data that's recorded onto a platform, you're doing it in a way that people know is safe, by doing checks, talking to your customers, talking to your clients.
[00:18:44] Nick: For instance, if you're dealing with legal issues in GDPR, you know where the information needs to be, you know where to store it. These sort of things make a massive difference, even on that first front.
[00:19:01] JH: All right, a quick awkward interruption here. It's fun to talk about user research, but you know what's really fun? Doing user research. And we wanna help you.
[00:19:11] Erin: We wanna help you so much that we have created a special place. It's called userinterviews.com/awkward, for you to get your first three participants free.
[00:19:21] JH: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's gonna be easy. It's gonna be quick. You're gonna love it. So get over there and check it out.
[00:19:30] Erin: And then when you're done with that, go on over to your favorite podcasting app and leave us a review.
[00:19:38] JH: A question, maybe from a different angle, I have for you: as you try to find the optimal, you know, great experience for researchers and participants, does that mean that you think researchers should be out there trying lots of tools, like go out and find the best one? Or is it stick with what works and really learn how to use it and, you know, be a little bit more conservative about jumping into new tools? 'Cause I feel like this is always a struggle on the product side: you get a little bit of shiny object syndrome, like, ooh, this one looks good.
[00:20:05] JH: But there's advantages to, you know, keeping the thing that you have and you know how to use, right? So...
[00:20:09] Nick: so if we're talking about like, researchers testing new platforms.
[00:20:13] JH: Yeah. They're like, I really want my experience to be great. And you have one researcher who's like, I'm gonna go out and look at all the tools and play with all of them.
[00:20:19] JH: And another one's like, this thing works pretty well, we're just gonna optimize the hell out of it and make it great. Like, you know, do you see one of those paths tend to work better than the other, or...
[00:20:26] Nick: I am a researcher that believes that the more you can do in one place, the better. You don't want to be running around with a bunch of tools.
[00:20:35] Nick: Well, maybe you do. Maybe it's efficient for you. But for me personally, I think using a platform that does it all, or using tools that do as much as possible, is much more valuable than trying to find one tool that does something really good and another tool that does something really good, because you're invariably creating more disconnects between the insight that you're trying to gather and the fluidity of your test.
[00:20:54] Nick: Make the user experience for yourself positive. Keep it all in one place and don't find yourself, you know, sprawling around trying to find this one, that one, this one. I think if you can master one platform, or maybe even two, and keep it really just tuned into what the capabilities are, and does this one really suit everything I need to do?
[00:21:16] Nick: Well, I believe you're much better off. You know, I guess what I kind of relate this to is when we talk about doing translation in a study. What that means is you're adding an extra step in where the insight comes from, right? You're having someone translate what someone's thinking to a different language, and then you're reporting on it.
[00:21:33] Nick: You're almost adding unnecessary nodes in the connection line here. I feel like doing that with a whole bunch of platforms is the exact same thing. You're running a risk of missing something, or failure on one end, and it's not cohesive from my perspective.
[00:21:47] Erin: I was gonna say, let's turn this into a debate.
[00:21:49] Erin: This will be fun. There are definitely pros and cons to all-in-one and multiple platforms, of course. But because you would prefer to work on all-in-one platforms, and you do a lot of research for different areas of your own platform, I'm curious if you could just walk us through some of the testing methods that you rely on to test these different areas of both the researcher and the participant experience, you know, different surface areas within the app and so on.
[00:22:16] Erin: Do you have some go-tos?
[00:22:19] Nick: Uh, yeah, I mean, I guess it's kind of hard for me, on the spot, to sort of say what we typically do to test a certain asset or a certain element of that type. But I like to hear it from the person's mouth, really. So my testing, when I'm working on my platform, is always going to be: I want to get on a call with somebody, and I want to walk them through whatever that is in a flow.
[00:22:41] Nick: I'm very much like, what's this flow, and how does it relate to the experience you're having? I'm not anti-unmoderated, because I feel like it serves its purpose, but I'm certainly pro-moderated in the sense of, if we're running a new, let's say, feature, like a buyer with analysis, whatever, the eye tracking, I want to be there experiencing what that researcher, or sorry, what that participant is going through.
[00:23:04] Nick: And sometimes the participant, well, most of the time, they are a researcher. What a beautiful thing. When you're testing your research with a researcher, that creates a beautiful environment for you to get the input that you need.
[00:23:17] Erin: Yeah. Are there challenges, you know, while we're on that topic, to researching researchers? Do they know too much?
[00:23:24] Erin: Do they make good participants, bad participants, challenging participants? Tell us about researching researchers.
[00:23:33] Nick: Yeah, I can appreciate that, because I'm an optimist by nature, which maybe comes off in the way I speak and the things I talk about, but, oh God, yeah, there are some challenges there.
[00:23:42] Nick: Certainly, and you kind of said it in the general sense: do they know too much, and do they require too much? Do they think that everything that they need can be solved in one second? Typically, yes. Because I think, I refer to this...
[00:23:57] Erin: They should know better. I'm sure all their insights have not turned into solutions immediately.
[00:24:04] Nick: People don't listen to researchers or designers? That's crazy. What kind of world are we living in? Yeah, that's actually a good point too. I think there's that need to wanna see it, and that's in the industry too, right? When you're a researcher, you want to see the impact that you're making. When you're testing out new things with researchers, they're saying, this needs to happen now, because I run tests like this all the time and I always run into this issue, so this has to change.
[00:24:27] Nick: And you're kind of like, ooh, that's not the way this works. Kind of like what you said, it doesn't happen this way. We're in prototype phase. So that's a good point that you made, Erin, for sure. Mm.
[00:24:38] JH: I think I'm thinking a little bit about is, you know, we hear this from users, a fair amount is people are often stretched thin and, you know, limited bandwidth to, to put into the research compared to maybe what they would do in an optimal situation.
[00:24:50] JH: And so if we're saying like, you really need to test everything and, and optimize the experience, there's sort of like maybe three categories. There's like the, the pre-test stage where you're like getting the participant to sign up and setting the right expectations. There's the actual test and facilitation.
[00:25:04] JH: And then you have some sort of wrap-up, conclusion stage. Like, where is the best ROI? Like, if I'm tight on time, I know I need to test it all and I want the whole experience to be great, but I'm limited. Is it, you wanna do it in order, 'cause it's kinda like a funnel, and so you wanna make sure step one is good before you get into step two?
[00:25:18] JH: Is it the test is everything, so really focus there? You know, how would you, if you had to 80/20 it, what advice would you give somebody on how to go about doing that?
[00:25:26] Nick: I would actually see it as a cycle. I wouldn't see one as more important than the other. And what I mean by that too is, could you possibly say that, you know, the participant's experience on the back end is less important than,
[00:25:38] Nick: you know, the test that they just went through? I don't think you could justify saying that, hey, it matters less. Everything matters. And that's the idea here. Where I think it all joins together is the point of where the test is. But it starts with the planning. The ROI comes with your test plan, and that's why I mentioned that intentionality, JH: when you're being intentional and specific about what you're trying to garner, whether it's the screener questions, and you're starting broad and getting more tight, or it's the actual test portion.
[00:26:07] Nick: Only testing after building a rapport, of course, with the asset at hand. They all tie in together; one relies on the other, relies on the other. Because what if this tester is a great tester too, right? Maybe you wanna follow up with them, but if you gave them a poor experience, or they felt like they were out of whack or insecure, you've almost missed that opportunity as well.
[00:26:31] Nick: And so I think it's all about the intentional planning for sure, but there is no one part that's more important than the other, if I can be honest.
[00:26:37] JH: Okay. Yeah. So if you're in a time crunch, you do need to look at the whole journey and prioritize, thinking about the experience end to end. But maybe you can find ways to more efficiently check on each step, to make sure you don't have any glaring mistakes or experience gaps as you go through it.
[00:26:51] Nick: It's that start portion, for sure. I think it is where that planning goes at the very beginning that can yield you all the ROI, but I think it ties back in at the end, after a participant has engaged with you, making it wholesome. And I know time crunches are a pretty serious thing that we all experience, but by and large, I also think that
[00:27:10] Nick: knowing what you need to get done in a reasonable time is a priority. That's a whole new issue. I think that comes down to prioritizing what needs to be tested and what doesn't, and it's not something that I would wanna speak on, you know, on a whim without diving into all the other features that go into it.
[00:27:27] Erin: What are your top tips for researchers either using a new UX testing platform, cause there are so many of them, or even just sticking with an all-in-one that has new features, a new UI? You know, things are always changing, hopefully, right? So what are your tips for researchers to make sure they're familiar with the UX in a way that they get what they need from it, including having a good experience for participants?
[00:27:52] Nick: I guess I'll answer this from the perspective of if you're going to use a single platform, which again, some people prefer to do and that's great as well, but I think the important thing that a researcher needs to do is ask the right questions and understand the capabilities of where they're going with their test and the platform that's at hand.
[00:28:10] Nick: And what I mean is not just read about it. You have a due diligence as a researcher; well, your job is to probe. Why not probe into the platform and find out what its maximum capabilities are? Inquire into the things that are coming on the roadmap. Maybe even volunteer to be one of the people developing those tools.
[00:28:29] Nick: So sometimes what you find online isn't exactly what it does. You need to make sure that, you know, if it says you can conduct live moderated testing, great, but perhaps does that moderated testing allow you to have a backroom chat, annotations on the side, or time tags? Explore that for yourself.
[00:28:46] Nick: If you don't need that, great, but the due diligence is on you to probe. Another idea here is: ask for help. I feel like researchers, and I'm guilty of this too, think, oh, I can figure this out myself, I'm a researcher, I know how this tool works. And then you start playing around with it, and wow, what a difference it would've made if you had said, hey, can I get on a call with someone who can show me the capabilities, or how this streamlines my availability or my use of this product?
[00:29:14] Nick: That's a huge difference. And the last one, and I use this analogy often: please don't just get in a Ferrari and hit the gas pedal if you've never driven a car before. Maybe try your friend's car, maybe try an old car. You don't need to jump in and start huge, Erin. You don't have to go in and start a 400-participant study with a card sort, tree testing,
[00:29:36] Nick: and, again, you know, sentiment analysis. No, just start small and work yourself into the platform. Don't exhaust yourself in trying to figure it out. Those would probably be the three things I'd say.
[00:29:48] JH: Do you have any sort of, uh, strong beliefs, or sort of almost like hot takes here, of this is something that most people are probably doing poorly in their experience?
[00:29:56] JH: Like, your confirmation emails to participants probably stink, and here's why. Do you think there are parts of the experience that people tend to drop the ball on most, that you see as you talk to researchers?
[00:30:07] Nick: Yeah. And actually, you know what's funny about this question, JH, is I think I have to be self-reflective on this, because it was something that I did wrong when I first started.
[00:30:14] Nick: That's right, people, researchers make mistakes. I think it's the screener. And the reason why I say that is because the screener, as I mentioned, is the first touchpoint participants go through. And really, why that matters is because you want participants to feel like they're comfortable.
[00:30:33] Nick: That good UX experience starts with that specific thing, that intentionality point that we talk about. Getting the right people comes from that screener, and so does making people feel like they can contribute. And again, if we're bringing in people, we wanna make sure that they are capable, but also that they feel like they can be.
[00:30:51] Nick: And I think with screeners, the top thing that I see people almost forgetting is that they put in questions from their test that maybe don't relate to what the screener is actually for. You're not trying to get answers; you're actually trying to stop the wrong people coming in. You're trying to build a wall.
[00:31:07] Nick: It's called a screener. So you don't need to be asking people questions that don't necessarily relate to who they are. You wanna be broad to start. Ask those qualification questions as early as possible. You don't want someone to get to the end of your screener and get disqualified because they use it
[00:31:24] Nick: less than four times a week. Of course, that's a terrible example, but you get where I'm going with this. And that's where that sort of start broad, be specific, be precise, and provide that positive, reservation-at-a-restaurant-style experience comes in, I think.
[00:31:39] Erin: Do you ever participate in studies?
[00:31:41] Erin: Have you hopped over to that side of the experience?
[00:31:44] Nick: Yeah, I have, internally on products, certainly. But I feel like one of the really cool things about that is I get to see the back end of what our participants go through. And a lot of that comes from wanting to be able to inform participants on what they might experience, or to be able to solve problems ahead of time.
[00:32:03] Nick: So I've sat in on a couple dozen as the researcher on the other side, which is a very strange situation to be in. Certainly odd, especially when you get
[00:32:13] Erin: A leading question, a leading question!
[00:32:15] Nick: Well, you start calling people out, actually. Right? That's what happens when you interview researchers, right?
[00:32:20] Nick: You're like, well, I know why you'd ask it that way. Yeah. Yeah.
[00:32:22] Erin: It's like,
[00:32:23] Nick: and I'm like, are we switching seats here? It gets actually even funnier, Erin, when you're in a focus group, because you almost don't wanna overtalk. I kind of feel like you try and predict what someone's going to ask you, or you're like, oh, I bet they're asking this because
[00:32:41] Erin: Right, what they're gonna do with this.
[00:32:43] Erin: Like, let me sneak in my feature requests here one way or another.
[00:32:48] Nick: Totally. Which is horrible. But then you also don't wanna talk over people, because again, everyone else is a user too. It's this double-edged sword that sort of presents itself.
[00:32:58] Erin: Nick, I always like to ask people, what got you into the user research game, and what do you love most about it?
[00:33:04] Nick: Yeah. I didn't start with UX research being at the top of my list. I was into music. I got into research, like music research policy, which is a real job, I promise you; if you Google it, it's a real job. But what kind of made me fall in love with it was when I became a teacher afterwards: the empathy component that was required as a teacher to understand the
[00:33:27] Nick: students that walk into your room, whether of international backgrounds or diverse learning needs, and wanting to understand how to connect with those people. That social dynamic engagement piece was kind of like, whoa, this is super cool. I have to not only educate people, but I have to really connect with them one by one.
[00:33:44] Nick: And as I was in my teaching career, um, I got an opportunity to do a UX research test on a one-off sort of project. Somebody asked me if I would help them with it; my friend started a business that involved UX testing. I jumped in and I was like, this is it. This is where my passion is.
[00:34:03] Nick: I smile when I do this. I smile when I'm running tests. It's the need to connect to people, and it's almost like a puzzle. I want to figure people out, and this vocation allows me to do that. I can't even look at products around my room without being like, hmm, you know, how was that done?
[00:34:21] Nick: So it has to be that piece for me, Erin.
[00:34:23] Erin: Any parting words of wisdom or thoughts on UX for UX testing you wanna leave us with?
[00:34:29] Nick: I'm gonna say it one more time. Yeah, let's hear it. Test early. Test everything. Test often. Please make the world more user-friendly, for yourself, for everyone. That's the best advice I can give.
[00:34:41] JH: All right. Yeah, I think there's a lot of truth in that for sure. Thank you, Nick.
[00:34:43] Erin: Thanks for joining us.
[00:34:44] Nick: Thank you for having me. Appreciate it.
[00:34:49] Erin: Thanks for listening to Awkward Silences, brought to you by User Interviews.
[00:34:54] JH: Theme music by Fragile Gang.


Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews
John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)
Nicholas Aramouni
Guest
Nicholas is a Senior Communications Manager and UX Researcher at Userlytics who specializes in global UX practices. Nicholas has experience in various industries, including music, entertainment, media, and e-commerce. He is passionate about the humanities, holds a B.Ed. in Social Studies from Mount Royal University, and is a former co-host of Mindspark, a learning podcast focused on K-12 education.