#100 - UXR Productivity Hacks with Michele Ronsen of Curiosity Tank

[00:00:00] Michele Ronsen: Everybody learns from their questions too. Again, I see repeatable questions from observers, repeatable questions from stakeholders, repeatable questions from practitioners, and there's a gap here, so how can I address this efficiently and holistically? I iterate on them. The first ones out of the gate aren't always as successful as they become later on.
[00:00:25] Erin May: This is Erin May.
[00:00:27] John Henry Forster: I'm John Henry Forster, and this is Awkward Silences.
[00:00:32] Erin: Silences.
[00:00:39] Erin: Hello everybody, and welcome back to Awkward Silences. Today we're here with our first external repeat guest, Michele Ronsen. So excited to have you here. You are the founder and CEO of Curiosity Tank, and really excited to talk about UXR productivity hacks today. So thanks for joining us and coming back.
[00:00:59] Michele: Awesome. Thanks for having me
[00:01:01] Erin: Got JH here too.
[00:01:01] JH: Yeah. This is ticking two boxes for me. I've always wanted to have somebody back on as a recurring guest. I thought that'd be fun, so I'm excited for that. And I love hacks and tricks and stuff, so this is right in my wheelhouse.
[00:01:13] Erin: Awesome. All right. Let's jump into it. First of all, what is a UXR productivity hack? I remember there was a time when Lifehacker was big, Web 2.0, 1.0, 1.5, everyone was a growth hacker, there was the age of hacking, and then there was a little bit of a backlash to that. But what is a UXR productivity hack and why is it a good thing?
[00:01:36] Michele: I interpret them as something that saves you time or brain power. A lot of repeatable things come up in our practice and there are repeatable solutions that we can implement to just increase our efficiencies, increase our productivity, and reduce that cognitive load. Lots of context switching in our industry, so things that really help us do our jobs more efficiently.
[00:02:01] Erin: Right, because you needed that cognitive capacity for other things is the idea, right?
[00:02:07] Michele: Right, right. Exactly.
[00:02:09] Erin: Don't get weighed down by repeated tasks.
[00:02:11] JH: Do you have a favorite one that comes to mind immediately when we talk about this topic?
[00:02:15] Michele: There are so many favorites. It's like asking me to pick my favorite child.
[00:02:19] Erin: No one's listening, who is it?
[00:02:22] Michele: I only have one, thank god. I think about them actually according to where we are in the research process. I have some productivity hacks for the planning stage, for the recruiting stage, for the conducting stage, for analysis and synthesis, presenting, retrospectives, note taking, things like that.
[00:02:42] Erin: That might actually be a good way to move through it. Should we go in a linear fashion or how do you want to run through?
[00:02:48] Michele: Sure. For planning, in the planning stage, I created something called the stakeholder kickoff sheet. With my stakeholders, I'm typically asking the same questions over and over again, so why not just have a check-off list? I know which ones I need to ask and get some solid answers on first in order to determine if this even is an appropriate project for user research, if it's even answerable, if it's ethical, if it's practical, what kind of level of confidence they're looking for, who wants to know this information, how will the learnings be applied, when do they need to make some informed decisions, do they have time to implement the decisions, and things like that. Something like a stakeholder kickoff sheet is super helpful. There's no need to reinvent that wheel over and over.
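For readers who want to make the kickoff sheet concrete, here is a minimal sketch of how those questions could live as a reusable checklist. The wording and structure are illustrative assumptions based on the questions Michele lists, not Curiosity Tank's actual sheet.

```python
# A minimal sketch of a reusable stakeholder kickoff checklist, loosely based on
# the questions Michele lists above. The wording is an illustrative assumption,
# not Curiosity Tank's actual sheet.

KICKOFF_QUESTIONS = [
    "Is this an appropriate, answerable project for user research?",
    "Is it ethical and practical to run?",
    "What level of confidence are stakeholders looking for?",
    "Who wants to know this information?",
    "How will the learnings be applied?",
    "When do informed decisions need to be made?",
    "Is there time to implement the decisions?",
]


def open_questions(answers: dict) -> list:
    """Return the kickoff questions that still lack a solid answer."""
    return [q for q in KICKOFF_QUESTIONS if not answers.get(q, "").strip()]


if __name__ == "__main__":
    draft = {"How will the learnings be applied?": "Prioritize the Q3 roadmap."}
    for question in open_questions(draft):
        print("Still open:", question)
```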
[00:03:36] JH: Nice. It sounds like that almost does a couple of things for you. One is, it almost serves as a checklist: you have to fill out these sections, these are important things, you don't forget them. But then it also takes away the responsibility of having to think about what goes into this type of document, because it's all set up. Is that how you think about it?
[00:03:51] Michele: Exactly, exactly. This particular document was created to support my students. Then I realized, "Well, wow, it really supports me in my practice too." You get the same questions over and over again from aspiring practitioners or practitioners who are newer in the field, or even existing practitioners that are looking to just become more efficient and more strategic in their work.
[00:04:14] Michele: Lots of these tools came out of need from other people; some of them came out of my own observation of repeating the same tasks over and over again. Another one would be in the recruiting and screening process. I have templates for solicitations to invite someone to participate and take a screener. I have templates for confirmations: "Great, you made it. Let's get you scheduled." Templates for, "Now you're scheduled, here's what to prepare. See you in two hours. If you need to reschedule, or if you have technical problems, who would you call to do what?" [chuckles] Things like that.
[00:04:51] Michele: These can be customized according to the name of the study, the name of the client, the time, the date, the name of the participant, and things like that, but there's no need to recreate the wheel every time. Again, I'm looking for ways to reduce my cognitive load. These are repeatable solutions to just increase efficiency.
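As one way to picture those reusable recruiting emails, here is a small sketch using Python's string templates. The placeholder fields (participant, study, date, time, researcher) are assumptions based on the customizations Michele mentions, not her actual templates.

```python
# A small sketch of a reusable confirmation email, in the spirit of the
# solicitation and confirmation templates described above. Placeholder names
# are illustrative assumptions.

from string import Template

CONFIRMATION = Template(
    "Hi $participant,\n\n"
    "You're confirmed for the $study study on $date at $time.\n"
    "If you need to reschedule or run into technical problems, just reply to this email.\n\n"
    "Thanks,\n$researcher"
)

print(
    CONFIRMATION.substitute(
        participant="Maria",
        study="Awkward Silences concept test",
        date="June 14",
        time="11:00 AM PT",
        researcher="Michele",
    )
)
```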
[00:05:09] JH: Can I ask a meta question on this before we really get into all the tips and tricks? I have a love-hate relationship with productivity and these things, because I love them, I take to them, but then I also catch myself sometimes feeling like this is just a healthy form of procrastination. Instead of doing the thing, I'm making all the templates or whatever. How do you advise people to get this right, where you want to see that it's a tedious thing you do often, because that's where you're going to get the best return from finding some of these hacks or these templates?
Or maybe an adage that I've heard when you're making physical things: when you're trying to buy tools, people will tell you, "Buy cheap tools to start, and then the ones that break or aren't serving your needs are the ones you need to upgrade, but don't fixate on getting all the best tools on day one." How do you make sure people don't spend too much time here and are not doing research because they're setting up all their systems? Does that make sense?
[00:05:53] Michele: A lot of these things can fall into research ops. If you have a research ops organization, many on the planning and recruiting side fall into their domain, and those things will be used with every study. The way I think about it is, what tasks am I repeating over and over again and what questions do I get asked over and over again? Then, is there a way to create a streamlined approach for this?
[00:06:21] JH: Correct. Yes. Make sure you have the evidence that this thing is wasting a lot of my time.
[00:06:25] Erin: Yes, and that by definition is not going to happen in your first 30 days on the job. You don't know what you're going to get asked over and over again, so you can figure some of that out as you go. The other thing I imagine is useful here is you're talking about a specific template to kick off your planning process, but then obviously, there are many more kinds of templates you might use throughout your research.
[00:06:47] Erin: We'll link to some of your resources and some of your templates in the notes from this episode, but often someone's already done the work for you. Start with somebody else's template, and then of course you can make it your own over time, like what makes sense for the kinds of problems you're facing, or that you're getting asked, or "I wish I'd known this at the beginning of this study, that would've saved me a lot of time." That might vary from organization to organization. That's really important too.
[00:07:11] Michele: Definitely. In my corporate workshops, we work to customize all of the templates to meet their needs. For example, there's a final presentation checklist: how to prepare for your final presentation, what to share, who to share it with, the length of time you have to present, the type of deliverables. And it goes into archiving: are you archiving in Condens, or are you archiving in Dovetail, or are you archiving in Dropbox? How are you archiving? How are you naming your project? Check your file nomenclature.
[00:07:39] Michele: Which final artifacts to upload and where? Who to tag to let them know that it's complete? Share the completed link to the folder in either a Teams channel or Slack channel or-- we customize that. Again, it takes out a lot of the administrative thinking so that we can really save that energy for much more of the heavy stuff.
[00:08:03] Erin: Anything else in the planning stage that's important to point out in terms of productivity hacks or should we move on to the next stage?
[00:08:10] Michele: I have a plan template. Each section of the plan template explains what it's used for, why it's important, and examples of how it might be completed. For instance, the background is a three-sentence overview of why we're doing this research, what we're looking to learn, and how the learnings will be applied, and then examples, and then research goals or objectives examples.
[00:08:34] Michele: I always try to lead by example in terms of, what does this mean and how does this work? Here are some prompts to help you think about that. In some cases, here are some prompts to help you have this conversation with your stakeholder so that they understand why it's important too.
[00:08:51] JH: As you get into like we've done the templates, we've done the planning, we've kicked it off, where do you find the next batch of optimizations come in?
[00:09:00] Michele: Recruiting. In terms of your recruiting criteria, also identifying your must-have criteria versus your nice-to-have criteria versus I'm-willing-to-loosen-these criteria when the recruit gets really tough. Identifying which criteria you can loosen upfront will save you time when you're in that really stressful place of, "I can't find anyone," or "I can't find enough people." I don't have to go back to my stakeholders, schedule another session, and say, "Look, it's not working," or "We're really trying to find a needle in a haystack here." I like to have those conversations upfront too. Then overschedule more than you need.
[00:09:35] JH: Can I play that one back just to make sure I'm getting it because I think that's a good one. It's not that you can templatize the criteria because every recruit and project is going to be different. It is that when you're agreeing on the criteria, get people to include a little bit more specificity so that if you need flex later, you save the time of having to go back to those people. Is that the lesson learned there?
[00:09:51] Michele: Exactly. It's also educating our stakeholders that this might not work as planned. I understand this is our goal but we don't work in a black and white universe. Let's plan for the gray space.
[00:10:03] Erin: Got it. Then you mentioned also overscheduling, how does that work?
[00:10:06] Michele: Right. If we're looking for five participants, let's schedule eight to accommodate no-shows, late starts, and people that slipped through the screener. You can always cancel if the first five show up and pass with flying colors, but always plan for your backups because it's going to happen. Especially during COVID, we're seeing much higher no-show rates.
[00:10:27] Erin: I'm curious, practically, how do we do this in a way that actually saves us time? We want five sessions and we invited eight, and then we needed one of the extras, but not all three. Now we've got to politely let the other two know, or get the new one scheduled in the slot. How do you make overscheduling actually save you time instead of creating more overhead?
[00:10:49] Michele: Let's just say we scheduled eight people. Out of the first five, four show up, so I need one extra. I've already got them in my schedule. My stakeholders already know when all eight sessions are going to take place, and I've already pre-screened them. Once I complete that number five, I'm going to write a very nice email, or someone will write a very nice email, to the last two participants and say, "Thank you very much. We have gathered all the feedback we're looking for for this study. We'd like to keep you in mind for future studies. Would it be okay to reach out to you then?"
[00:11:24] Erin: Do you let people know upfront that their study might not happen?
[00:11:29] Michele: It depends. It depends on the type of recruit. It depends on the amount of the incentive. It depends on how closely these things are scheduled. If it's a day or two in advance, probably not so much. I actually try to have a backup day. If it's an hour or two, we might pay them the incentive anyway, or a portion of the incentive, for reserving the time.
[00:11:50] JH: Nice. Scheduling feels like a broad bucket. Are there other things that, when you're working with collaborators and other stakeholders, make it easier or save you time?
[00:11:58] Michele: Yes. I have a note-taking spreadsheet, and in my note-taking spreadsheet I have a tab for my intro and then a tab for the questions that I'll ask, and those are all timestamped. I've also identified my must-have questions. If I'm running short on time, I know these are the three or five that I absolutely have to get to. But in addition to that, I have a whole tab for my stakeholders.
That tab includes who the participants are, their key (P1, P2, not their names), when they're scheduled, and what their core screener responses are, so that they can easily make the connection between who we're speaking with and the responses. Are they a new user? Are they a tenured user? Are they an advanced user? According to the recruiting segmentation. Then there are also direct links to join. We also have a special sheet that talks about how to be a good observer in a session, what to expect and what not to expect, some note-taking tools and strategies, and things like that.
[00:13:00] Erin: Yes. When you're dealing with a session, you've got a lot of stakeholders: you've got the moderator, maybe there's an ops person organizing, there's a participant, maybe you've got a note-taker. There could potentially be a lot of people involved in any one of these sessions. You are being productive not just to save yourself time in the session you're moderating, but also answering all the questions and wrangling all these assets and information for everyone who might be involved. It's the information you need, where you need it, and getting that all dialed in ahead of time is going to save everybody a lot of time, so it compounds.
[00:13:33] Michele: Yes. Then everybody learns from their questions too. Again, I see repeatable questions from observers, repeatable questions from stakeholders, repeatable questions from practitioners. It's like, there's a gap here. How can I address this efficiently and holistically? I iterate on them. The first ones out of the gate aren't always as successful as they become later on.
[00:13:57] JH: You mentioned somewhere in there having the core screening criteria available in that sheet for folks. Is that another way of referring to the must-have screening criteria, or the most interesting ones? This is just a tangent, but I was curious what that means and how you think about it.
[00:14:12] Michele: Yes, core would be the basic criteria: we're looking for prospects, not customers; we're looking for all US-based participants, 18 and older, a mix of genders, a mix of proficiency levels. But that also might include some exclusion criteria: we're not looking for people who work in organizations with more than 10,000 people. Then in my nice-to-have, for example, it would be nice to have people who believe X, Y, or Z, or have the attitudes about one, two, three, or don't, whatever that might be. My relaxed criteria might be, if we can't find enough in the US, we can expand to Canada, the UK, and Australia, something like that.
[00:15:00] JH: Cool. Anything you do with or recommend people do with the calendar events themselves for moderated sessions? There's so many things you can do in terms of the way you set up the reminders, or if you create one event for the participant and one for the stakeholders, or put them all on the same one, or have the event be public or private or naming conventions. I don't know if you found anything in that world that is useful and saves time?
[00:15:21] Michele: All of the above. First, never schedule more than three or four sessions in a day. Again, the cognitive load on that is pretty heavy, and your own moderator performance will decline just from pure fatigue. You won't be able to remember or decipher one session from another after the second one. Anyway, always record your sessions. The half an hour in between will allow you to do your three issues, get into the headspace of your facilitator groove, and also clear your head of anything that might have happened just before or might happen next.
I also say it's really helpful to do a pilot at least two days before you actually launch your sessions, and you're piloting for everything. You're piloting to make sure your technology works, your links work, access outside of the organization works, different device types work, and that your question set can fit within the timeframe allotted. Now, your next question's probably going to be, who do you pilot with?
Let me get to your calendaring invitations. I do set up two separate calendars. I do have a naming convention for the participants, and that is shared in advance so that we're not sacrificing or determining-- what's the word I'm looking for? Opening up the door for any PII issues. The team, the internal stakeholders get one invitation and then the participant gets another invitation. It's only the facilitator and the participant who are on their invitation. Otherwise, I think it could be very intimidating.
[00:16:56] Erin: On the PII note, because I know this is top of mind for a lot of people these days: PII, we want to be careful about who has access to it, on a need-to-know basis. It's not an all-or-nothing thing. There are many different levels, from emails and phone numbers to social security numbers, where you get into really heavy stuff. Any tips in terms of how you streamline who actually can access what PII, and at what stage of the research?
[00:17:26] Michele: Well, I'm creating that sheet again for the participant details, and I'm only including the screener responses that are pertinent to the study. I'm not including age if the criteria was over 18 or between 18 and 40; if they fit that criteria, I'm going to take it out unless it's totally pertinent to the study. I've also removed all of their names. In the initial note-taking, it will say P1, Joe; P2, Maria; P3, and then I'm going to remove all of those individual names. Then, depending upon what organization I'm working with, because every organization has their own approach, I may or may not be uploading or archiving that information. What level of information we're actually archiving really differs.
[00:18:19] JH: Are there ways to make that less burdensome on yourself? I know in my experience, redacting PII, even once you know how to do it, can sometimes just be time-consuming. Have you found ways to make that simpler, how you're dropping the names or things like that?
[00:18:33] Michele: I haven't. Maybe that's a good idea for--
[00:18:36] JH: All right. One of the listeners will get back to us.
[00:18:39] Michele: This is an example though, where I'm not quite sure that's repeatable because every study might be different. Specific age might not be important in this study, but it might be really important in the next study. I'd have to think about that.
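Since no ready-made answer comes up in the conversation, here is one possible sketch of the kind of lightweight redaction JH is asking about: swapping participant names for their P1/P2 keys in raw notes. This is an illustration under assumed names, not a tool either guest or host actually uses.

```python
# One possible sketch of lightweight PII redaction: replace participant names
# with their anonymous keys (Joe -> P1) in raw notes. Illustrative only.

import re


def redact(notes: str, name_to_key: dict) -> str:
    """Replace each participant name with its anonymous key."""
    for name, key in name_to_key.items():
        notes = re.sub(rf"\b{re.escape(name)}\b", key, notes)
    return notes


raw = "Joe struggled with checkout; Maria found it immediately."
print(redact(raw, {"Joe": "P1", "Maria": "P2"}))
# -> P1 struggled with checkout; P2 found it immediately.
```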
[00:18:56] Erin: Awesome. Let's see. You talked about archiving, and you've done some research, you've got some backups in case people don't show up, you've got your note-taking grid, things like that. We've got to do something with these insights, with these raw assets and materials. Any tips on saving some time with some repeatable tasks at that stage?
[00:18:56] Michele: Yes. Specific file naming conventions are super important. Per my client, I might come up with a file naming convention. Let's call it Awkward Silences, or AS_discovery_Q1-2_concept-testing. I know very quickly when I'm looking at the assets that all of them start that same way. Then maybe I have a concept test, a screener, a discussion guide, but having that really clear file naming system set up in advance will really help organize the files and make that archiving process much better.
I'd also say question banks. Developing question banks for screeners and question banks for discussion guides is really helpful. I've done that for a number of clients, in their platforms and ad hoc. This removes the load of asking that same question and figuring out how to phrase it for each line of business over and over again, or maybe for each archetype.
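To make the file naming convention tangible, here is a minimal sketch that builds names following the AS_discovery_Q1-2_concept-testing pattern mentioned above. The exact field order is an assumption based on that single example.

```python
# A minimal sketch of enforcing a file naming convention like
# "AS_discovery_Q1-2_concept-testing". The field order is an assumption
# based on the single example given in the conversation.

def asset_name(project: str, phase: str, period: str, artifact: str) -> str:
    """Build a consistent file name: project_phase_period_artifact."""
    parts = [project, phase, period, artifact]
    return "_".join(p.strip().replace(" ", "-") for p in parts)


print(asset_name("AS", "discovery", "Q1-2", "concept testing"))
# -> AS_discovery_Q1-2_concept-testing
print(asset_name("AS", "discovery", "Q1-2", "screener"))
# -> AS_discovery_Q1-2_screener
```

A consistent prefix like this is also what makes the quick search on the umbrella name, mentioned a little later, work reliably.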
Ad Break Insert
[00:20:29] JH: All right. A quick awkward interruption here. It's fun to talk about user research. What's really fun is doing user research, and we want to help you with that.
[00:20:38] Erin: We want to help you so much that we have created a special place. It's called userinterviews.com/awkward, for you to get your first three participants free.
[00:20:49] JH: We all know we should be talking to users more. We went ahead and removed as many barriers as possible. It's going to be easy. It's going to be quick. You're going to love it. Get over there and check it out.
[00:20:58] Erin: Then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.
Ad Break Ends
[00:21:07] JH: The file one is interesting, because it feels like one of the first ones we've talked about where it's probably actually a little bit more work for you right now, maybe an extra minute or two, but it probably saves you like a half hour, an hour of work later when you're trying to find some important asset. Is that the dynamic on that one?
[00:21:21] Michele: Yes. Then I can just do a quick search on the umbrella name when I'm looking for it. If I've used it consistently, or my students have used it consistently, or my team or whoever, I know what that file name should be because I created that nomenclature system. I could look up, I forget what we said, Awkward Silences, concept testing, Q-Web [unintelligible 00:21:10] screener.
[00:21:45] JH: Got you. Cool. Are these all in a folder as well, or where do these files live? Is there another wrapper for everything?
[00:21:52] Michele: Yes. Some clients have them on Google Drive, some clients have them in Dropbox, some have them in SharePoint. I always try to acquiesce to whatever's going to be most culturally relevant for the teams.
[00:22:05] Erin: Have you solved the final, really final, almost final--
[00:22:09] JH: D3 final--
[00:22:10] Erin: -issue. Are these all editable documents where the version control isn't an issue?
[00:22:17] Michele: Yes, I definitely prefer the G Suite over the Microsoft suite for that reason alone. It's interesting, I think that's really cultural too, whether they save the versions, the final, V2, final-final, but finally archiving in the end. Sometimes it's important to keep those iterations, sometimes it's not. I'd be a millionaire if I could solve that one.
[00:22:41] JH: Broader question on something like this. Is this something that, if I'm trying to bring it to my team, needs to come in as, "Hey, I've thought about it, and a thing I've seen over the last quarter or two is we can never find our files or things from previous research. Here's a solution: let's use this convention, let's use this folder structure," whatever? Does it need to come in like that, or can you do it more bottoms-up: "I'm doing research right now and I want to try something new, and on my project I'm going to be really organized with these file names and everything else, and then I'll see if people want to adopt it"? Because some of these are really straightforward, and I can imagine how you'd put them into your workflow, like, "Oh, okay, I just made this template a couple of times. I'll break it off and share it with people." This one feels a little bit different. Do you try to do it as a team-wide thing? Can you start it as a one-off and see if people adopt it?
[00:23:22] Michele: I've seen it both ways. In my cohorts, I do it top-down and I pre-label all of their tools and templates for them: first name_CT_cohort name_name of thing, plan template, creator template, discussion guide template, whatever that thing is, and then it's pre-named. I've also trained researchers who are implementing this just for their own workflow, and research ops teams who implement it from the top down. I say start wherever you can.
[00:23:58] JH: Cool.
[00:23:58] Erin: I know one thing that you focused on is reducing fraudulent participants, which is an unfortunate reality that can happen, something I know we've spent a lot of time keeping off of our platform, but what are your tips for avoiding that unpleasant waste of time? We're talking about productivity. How do you save time on fraudulent participants?
[00:24:19] Michele: Gee, that's getting harder. I just had one last week. Getting LinkedIn addresses and verifying those addresses through the LinkedIn platform is really helpful. Look out for your spidey senses. The participant that I was meeting with on Friday said he was in New York City.
It was 8:00 AM in California, 11:00 AM in New York, and it was pitch black out where he was. I was like, "Hmm, do you have any windows in your room?" He had a very thick accent. I could tell he was not a native English speaker. One best practice is to reconfirm the must-have criteria and check for the same responses.
[00:25:01] JH: Oh, that's a good one. If you're in that call, in the first five minutes, getting some flags, and you do this criteria check and clearly something's off, will you just let the person go right then to get the time back, or how do you unwind that situation?
[00:25:14] Michele: I can tell pretty quickly. This participant also called in from a mobile device, and he needed to participate from a desktop. When he was switching from mobile to desktop, I sent him a quick email, and that email was opened three times in about 10 seconds. That sends a little, there's a little anxiety there.
I would just say, "I'm sorry, I don't think that we're going to be able to continue this session today. There are certain requirements to participate, and screen sharing from desktop is one of them. Thank you very much for your time."
If I don't think they're fraudulent and they're not prepared, I might ask them to reschedule. I might walk them through the tech setup if those are the issues. For fraudulent participants, I want to let them go within the first five minutes. I don't want to waste my time. I don't want to waste their time. Checking that must-have criteria upfront is really important, and their surrounding [inaudible 00:25:37].
[00:26:12] Erin: Any tips on that tech setup side? I know that a couple of years ago, not every participant had Zoom downloaded yet, but now it's much more prevalent obviously, but any tips in terms of making sure your participants are prepared so that you don't have to reschedule and can get off on the right foot from the beginning?
[00:26:32] Michele: Clarifying what they should do before the session. I have a session coming up right at 11:00 AM, and these sessions need to take place on Chrome, via a desktop. Reminding them in advance, "Please join the Zoom from a desktop and have Chrome downloaded and ready to go."
[00:26:51] JH: Where would you advise someone to make the call on whether the session is salvageable or not? You booked 45 minutes, you think you have 30 or 35 minutes of really important questions to get through, and the first 5 or 10 minutes get lost to "I've got to switch to my desktop" or this and that. At some point, do you let the person go then too? Or do you go, we have a little buffer and I can still get through the core stuff, let's make it work? What kind of judgment would you recommend there?
[00:27:16] Michele: I think it depends on the participant and the study. It depends on how hard the recruit is, how badly I need them. There's a lot of gray area.
[00:27:23] JH: It sounds like maybe the rule of thumb is, the harder it is to find participants for the thing you're doing, maybe you have a little bit more wiggle or flex on some tech hiccups. But if you're bountiful with qualified participants, maybe be a little stricter and save your own time.
[00:27:36] Michele: Yes. That's a good rule of thumb. Also make sure you have those 30 minutes of buffer time in between, because that will allow you to accommodate late starts and hiccups and things like that, as well as debrief right after. Those 30 minutes of buffer time in between are really important [unintelligible 00:27:18] measures.
[00:27:53] Erin: We've been talking about recruiting participants a lot, obviously something we think about a lot. What about if you're trying to get ahead of things and build your own pool of participants, any tips there in terms of making that process efficient?
[00:28:10] Michele: Question banks are super, super helpful because we're generally posing similar questions, albeit with some specific nuances: who blank? What does blank mean? What motivates you to blank? Where does blank come into play? When does blank occur? Why, where, how, are typical questions or typical probes. By building out these question banks for clients, they're able to just fill in the blanks, so to speak, because they're repeatable and they're scrubbed for biases, and you can preload them into your platforms too, so that you can copy and paste them, and then organize them.
The one that I just built was also organized by phase in the research process. We have a whole question bank for warmups, a whole question bank for digging deeper, a whole question bank for usability tasks and scenarios already set up, and a whole question bank for wrap-up and reflection types of questions. Then qual versus quant. You can look at this table of questions in a variety of different ways.
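As an illustration of how a phase-organized, fill-in-the-blank question bank might be stored, here is a small sketch. The phases mirror the ones Michele names; the specific question wording and study values are assumptions.

```python
# A small sketch of a question bank organized by phase, mirroring the
# warm-up / dig-deeper / wrap-up structure described above.
# The fill-in-the-blank wording is an illustrative assumption.

from string import Template

QUESTION_BANK = {
    "warmup": [Template("Tell me a little about how you $activity today.")],
    "dig_deeper": [
        Template("What does $term mean to you?"),
        Template("What motivates you to $activity?"),
    ],
    "wrapup": [Template("Is there anything about $topic we didn't cover?")],
}

# Fill in the blanks for a specific study.
for phase, questions in QUESTION_BANK.items():
    for q in questions:
        print(phase, "->", q.safe_substitute(
            activity="track your expenses", term="budgeting", topic="budgeting"))
```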
[00:29:18] JH: The idea is that you've had different people answer those questions over time. In the future, you go back to the pool, like, "Oh, I need somebody who said their favorite color is red," and I know from this question bank that a bunch of people have already answered this. Is that the next step here, or how does this pull into the recruiting or the participant side of things?
[00:29:35] Michele: I think it depends on how we're segmenting the participant pool. Are we segmenting them in terms of tenure, proficiency, attitude? Then we would build in some specific questions. Keeping a pool hot, though, takes a lot of work, and making sure that we don't contact people more than every 90 or 120 days is important too. In terms of segmenting specifically, I think that's going to depend on what the criteria is for one segment versus another in the participant pool.
[00:30:14] JH: Sure. That's fair. Panel nurturing is something that I'm actually going to be thinking about over the next couple of months. If anyone listening to this has opinions, reach out to me, I'd love to chat with you, as an aside. A bit of a meta question again. I feel like one thing you always see in productivity stuff is people get really fixated on the efficiency side of things: how can I do this task faster? I think there's a broader philosophy that there's nothing worse than efficiently doing something that doesn't need to be done. Are there things that you see researchers maybe spending time on that they could just drop entirely, or how do they find those things in their stack? I'd imagine over time this process gets more and more involved and there's more and more stuff going on. How can people reevaluate steps they could skip, that kind of stuff?
[00:30:52] Michele: I don't necessarily think about it as-- I think about these things as working smarter, not harder. Most of these things you're doing anyway: you're planning anyway, or you should be, you're recruiting anyway, you're screening anyway. They're not make-work, so to speak; it's more about let's do the work needed as efficiently as possible. What do I see that are time wasters? Not being able to find the files you want. Not documenting a change that was agreed upon. Not understanding which criteria could be loosened before you start the recruiting process. Not confirming the must-have questions and realizing 40 minutes into the session that your participant isn't qualified or fell into the wrong segment. Not building timestamps and must-have questions into your discussion guide, and following your guide top to bottom regardless of your start time.
Another one is timestamps. You always want to start your timestamp at 0:00, not at 0:30, even if your session starts at 30 minutes after the hour, because that will just F you up. You'll always be trying to do the math in your head: okay, that was 17 minutes in, that was at :47. Just don't do that. Always timestamp from 0:00.
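A quick sketch of that advice in practice: convert wall-clock note times into offsets from the session start, so nobody is doing ":47 minus :30" math mid-session. The times here are made up for illustration.

```python
# A quick sketch of normalizing note timestamps to elapsed time from the
# session start, per the "always timestamp from 0:00" advice above.
# Times are made up for illustration.

from datetime import datetime


def to_offset(note_time: str, session_start: str, fmt: str = "%H:%M") -> str:
    """Return a note's time as mm:ss elapsed since the session started."""
    delta = datetime.strptime(note_time, fmt) - datetime.strptime(session_start, fmt)
    minutes, seconds = divmod(int(delta.total_seconds()), 60)
    return f"{minutes:02d}:{seconds:02d}"


print(to_offset("10:47", "10:30"))  # -> 17:00, i.e., 17 minutes into the session
```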
[00:32:18] Erin: That's a good one. What about on the opposite side of things? In theory, what we're doing is trying to save time, save cognitive energy for this other stuff we need it for. What is that stuff we need it for? Where should we not worry about efficiency, or where is this just going to take time? Where does this need your human attention every time, like it's the first time? Forget about hacks, spend your time and energy there. What are those things?
[00:32:48] Michele: Being present for your stakeholders, making sure your stakeholders really feel like you hear them and demonstrating that you hear them. I think being present for your participants; there's no substitute for being 100% present. I made that joke that the 30 minutes will give you time to do your three issues and that kind of thing, but that's super important. You really want to be present. You have a lot going on when you're running a session. You have your participant, and you want to make sure they feel like a million bucks. You've got your guide over here and you're jumping around. Then maybe you have your stakeholders on a Slack channel saying, "Dig deeper into that. How do they spell that? Find out where they got that, how long they've had that," and the like.
There's the garbage truck behind me. I could get the call to pick up my kid at camp because of a COVID exposure. There's just a lot going on. I would say, save that time to be present with those two core audiences, who I think are my users. I always have two sets of users: the people who need to consume my research and the people we're designing for. The same goes throughout the analysis and synthesis stage and the presenting stage, things like that: what do I need to be present for, and what is more of a repeatable administrative task?
[00:34:05] JH: Yes, I think that's a fair way of thinking about it, because one thing we hear a lot from researchers, or anybody in product development land, is just often feeling a little bit overwhelmed, like there's too much to juggle. If you're able to get some time back in other areas, you don't need to immediately fill it with new stuff. You can use that space to breathe and do the rest of your job well.
[00:34:22] Michele: Yes, the context switching. I'm trying to minimize the context switching; it's a lot of context switching. If I can remove those challenges and just make it easier, I don't have to think about everything. Not everything really needs the same amount of thought, but some things do.
[00:34:43] JH: What do you think these add up to in terms of time saved? Obviously this is going to be very imprecise and pseudoscience, but if you had to guess, on a scale of 1 to 100, where doing it all from scratch the first time is 100 and takes you forever, what percentage of that time could you save when you really have this streamlined and dialed in like you have over the years?
[00:35:01] Michele: When I launch this, I'm going to ask you to track that. I want to bundle these and then launch them together. It's really hard to say, because you're not going to get it right the first time out of the gate. Just start with something and then iterate. For that measurement, first of all, you'd need to establish a baseline, but that in and of itself would be a great study. Hours, though.
[00:35:22] JH: Oh, yes. Just starting with where are we spending our time in these research projects and where do we think we could compress is maybe a good starting point.
[00:35:29] Michele: Yes, but definitely hours. If you think about it, you could navel-gaze on a screener for a day or two, but if you had a screener bank of questions, that would jumpstart your process significantly.
[00:35:43] Erin: Yes. I think, where are we spending the most time and where are we the most miserable? Because it's not just how much time you're spending, but where you're spending it, and whether it's draining your energy and your love of the craft or whatever it is. Those are also things you want to try to find ways to not do, or at least do quicker.
[00:36:02] Michele: I think the two biggest pain points, certainly in research ops, are at the beginning and the end of the process. Again, I would look inside-- you just said that in a really nice way. Note-taking templates and frameworks also significantly expedite your analysis and synthesis, when you're determining upfront what you're taking notes on and how you're taking those notes. Are you taking them according to heuristic and by participant and then cutting it by segment? Are you taking notes on direct quotes or data to triangulate, and how are you marking them? Those will really, really help too.
[00:36:37] JH: Last one. Do you think people should be thinking more about the duration and number of sessions as a way to get time back? A lot of people default to round numbers, right? 30 minutes or 60 minutes, but maybe you should be doing 45-minute sessions and you could be saving 15 minutes a pop. Or maybe your team likes to talk to seven or eight people and you could actually be fine with five or six, and that's a few hours right there. Any time savings there that people should be mindful of?
[00:36:58] Michele: When you see some really clear patterns and you're confident in the patterns you see, and you have a lot of other people scheduled, think really hard about whether that's a good use not only of your time, but of your stakeholders' time and of those participants' time. Not all data is helpful; it's not the quantity of the data, it's the quality of the data. Maybe it would be helpful to just pause for two or three days, think about whether you want to pivot and then again dig in deeper on something that you've already learned from the clear patterns, or let them go. No one likes to waste time.
[00:37:38] Erin: Just imagining that lucky last participant, you're like, "I've heard enough. Done. No more participants." It all makes sense, but I think that's a great one. No need to stick to the plan, whatever the plan might be. Sometimes you need to add more participants; that's not going to help us save time, but it can cut both ways.
[00:37:56] Michele: Absolutely. Plans change, and what we do is iterative. Nothing about user research is static, it's so dynamic. Our roadmaps change, our political climate changes, we're having rolling blackouts and those change our schedules, and our competition changes. Just revisit that approach over and over and keep in constant contact with your stakeholders to find out, has anything changed? Do we want to rethink anything?
[00:38:29] Erin: Michele, thanks for joining us again. It's great to have you, and we look forward to connecting another time as well. Thanks for sharing all your great tips.
[00:38:38] JH: Yes. Good to have you back.
[00:38:39] Michele: Thanks for having me, guys. It's great to see you.
[00:38:44] Erin: Thanks for listening to Awkward Silences brought to you by User Interviews.
[00:38:49] JH: Theme music by Fragile Gang

Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews

John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)

Michele Ronsen
Guest
Michele Ronsen is the founder of Curiosity Tank, a consulting and education firm specializing in human-centered research, design development, and hands-on learning programs. Her clients include Slack, Zillow, Facebook, Microsoft, and others. Michele is also an instructor, content creator, and workshop facilitator at General Assembly. Previously, she was Vice President and Creative Director and then Senior Vice President and Creative Director at Wells Fargo, and held two positions at Bank of America.