#11 - Integrating Research Faster with John Cutler

Erin: We are here today with John Cutler, who is a product evangelist at Amplitude, and he's going to tell us all about what that means exactly. So today we're going to talk about just-in-time research. So do you plan your research out years in advance according to an idealized roadmap of everything that will inevitably happen? Or, do we think about just-in-time research or something in between? And what does that look like? So, that's what we're going to talk about today.
Erin: So, John, thanks for being here with us.
John: Awesome. I'm happy to be here.
Erin: And then of course, we have our other John-like person, John-Henry.
JH: I am also here. Thanks for joining and putting up with my harassment to get you on the podcast via Twitter, so glad that worked out.
John: It's the John-likes, or we can both be John-like on that.
JH: Exactly, perfect.
Erin: Fantastic. John, tell me a little bit: what does being a product evangelist mean? What do you do at Amplitude?
John: We actually struggled with the title, and it's kind of a working title at the moment. My actual role is helping up-level teams, regardless of whether they're using Amplitude or not. It's kind of a sweet gig. And so what I'm evangelizing for is kind of best practices, or education, or continuous improvement for product development teams. And the evangelism part, I don't really need to even talk about our product, it's actually an analytics product, it just tends to come up because teams have problems related to measurement or making data-informed decisions, so sometimes it goes there, but I don't need to talk about the product explicitly.
John: So I generally get to have fun talking to lots of teams. I think in the last couple weeks, I've maybe spoken to 60 teams. So it's kind of a dream come true in terms of having an amazing sample now. So for the first time, I think I'm coming into my own where I can say, "Hey, if you look out there, you tend to see this kind of cross section of practices." So it's kind of amazing to talk to that many teams.
JH: Yeah, that's awesome. I'm very jealous of a role like that. In all these conversations with teams, is just-in-time research something people are already doing? Or is it a pet idea that you'd like to see teams adopt more of?
John: It actually comes up a ton, and it's a spectrum in terms of what just-in-time means. So just-in-time for some people means just-in-time to do our annual budget. And for other people just-in-time means, "Hey, we're kind of pulling in this effort in the next week or two, can you help set this up and jump in with the team?"
John: I'm kind of known ... I have this thing that I talk about a lot called Start Together, Work Together, Finish Together. And starting together is a thing I'm really passionate about. And as a UX researcher, it was a highlight of my career launching in on these efforts together with the team versus getting too far upstream. And I always liken it to a horror movie: you open up the door and all the people are looking at the problem together for the first time. That gives me such a rush individually, working like that. But I think more practically as well, you see such amazing things happen when the team grapples with a problem at the same time.
John: Contrast that with a situation ... A couple weeks ago I dealt with a PM who had, for sort of eight months, been mulling over and planning something and then kind of coordinating with UX research and coordinating with data science and building this business case. And it was never pulled in and they kept mulling it over. And by the time it 'got dropped on a team,' all the life had been taken out of it. There was no passion, there was no opening up the door at the same time. So, kind of a long roundabout way of saying, in these calls you hear a ton of discussions about how far upstream to get things. A cross section of those discussions are with UX researchers and UX folks wondering about this specific problem for UX research. But it's kind of a broader problem on teams about what starting together actually means.
Erin: Right. And I think ... Agile is the new ... how many letters? A-G-I-L-E, the five-letter word, right? You hear about speed so much, but speed isn't really maybe the point so much as the synchronicity you're talking about, right? Where we are discovering and conversing and unfolding at the same time such that the conditions of what we're trying to do haven't changed by the time we're taking action on them.
John: Right. And that happens to every ... It's funny because I met with a UX research leader here in Santa Barbara, a company called Upfolio. They have a pretty established kind of UX research and design practice. I met with Lori there who heads up the UX research. She sees it as a portfolio of research efforts. Some of them are a little bit more forward looking. But interestingly, at that company, even the forward-looking things tend to be small, cross-functional little teams kind of tackling it. And it's cross-functional in the sense that there's a founder of the company alongside a 22-year-old first-time designer, and a developer, and an experience researcher.
John: So even in that setting, when people are getting upstream, there's still that level of like, "I don't know if the word is synchronicity or serendipity ... " Or just sort of discovering the problem for the first time!
JH: I think cross-team collaboration as part of this seems like it is a lot of it, right? I think when people throw shade at waterfall processes and stuff, you think of like the Gantt chart, and you think of all the different roles and things moving left to right in these different steps. But the reality is, you can't do everything all at once; things happen in phases and things move left to right. But it seems like the failure of waterfall in those charts was that team one does step one, and then hands it to team two for step two. And if you actually have the right people involved in the beginning, and they move it through all those steps, it can work much better. Is that kind of what you're getting at?
John: Yeah. I wrote something recently about sprints in general. And here's the irony for me about design and sprints and the kind of agility. Designers have been working in innovative ways for a really, really long time. And the idea of frequent integration ... So think even like a design critique, like, "Okay, we create these things or ... " The book about Pixar, Creativity, Inc., I think they talk about their weeklies, and they're bringing together the story, and they're challenging each other, and they're kind of getting down to it ... these things are age old.
John: The essential trait of waterfall is infrequent integration of ideas, and perspectives, and learning, and technologies, and assumptions, and all those things. That's kind of the part and parcel element. And it's the idea that you go off and do your thing, the other person goes off and does their thing, and then at the end, it will kind of all come together. People often cite like, "Oh, well, you can't build a building in an agile way." But actually, a building is like an Ikea piece of furniture; you actually are integrating. The structure wouldn't stand if the parts that you were building weren't somehow fitting together.
John: So it's kind of high concept, but I think that for design and UX and UX research stepping back and saying, it's about the integration of ideas, and that's what characterizes doing that with a team of people and integrating where you're at. It isn't about shipping code necessarily every two weeks. It's the spirit of integration that actually defines working in this iterative way. And I would posit that for designers, it's actually the way we'd love to work.
John: The problem is that ... I have a friend here in Santa Barbara, there's a company called Deckers and it's a shoe company. And when you design the shoes, it's like an 18 month lead time, and at a certain point, you need to design the shoes, and then it goes to mass production. The environment that we're in, in software development, done right, has amazing opportunities to iterate on things once they're out there.
John: I would say that if we take a step back and imagine what we're doing is kind of like continuous design versus design then build, and then we're doing it together with a cross-functional team, that opens up amazing possibilities for designers. I'm like bullish on this whole thing. I don't get agitated by it anymore.
Erin: I love this phrase of continuous integration of ideas. How have you seen this work well in teams? How do you ... And obviously this touches on the larger agile development topic. But how do you do this research just-in-time without it devolving into chaos? Continuous integration could easily become, "Continuous! Wait, what are we doing?" So how does it work?
John: Well, I think that in this case, very specifically, it would be like the team creates a backlog of learning goals. And you don't want to spend three months with one learning in progress only to find out that you were answering the wrong question. So you imagine a development team is there, and you have these learning goals, and you're like, "Well, these first two weeks, we really want to prioritize learning about this. What are the best things at our disposal to learn about it? Do we need to write code? Probably not. Do we need to maybe try to sell this or get this out? Or, do we need to talk to ... "
John: UX research brings out a whole kind of quiver of best practices of how to answer questions. But at the same time, the team is there too, and they're willing and able to participate. And so I think the way to prevent it from devolving into a free-for-all is creating that cadence and being very deliberate about your learning goals, how you intend to learn, and then what does that experiment around how you're going to learn look like, and then how are you going to reflect on it in a short period of time.
JH: I'd love to get your perspective on the benefit of this when a team is doing it. It feels like there's two things. One is, the learning is fresher, right? So you're learning and then you're doing something with it right after. So it's not like sitting on a shelf for two months, because someone did it way upstream, and contexts have changed, or things have evolved, or whatever, right? So that'd be one benefit.
JH: Whereas the other is kind of, because you're doing it that way, the team morale is almost like higher in the sense of, "We went out to learn this thing and now we're doing something with it." Whereas, if you're a UX researcher, and you go out and do this thing, and you learn something really interesting, and then nobody wants to do anything with it for two months, it's not a great experience. As a person, it's not a very fulfilling feedback loop that, you did this great work, and you learn this really interesting cool thing that should impact the product and then nobody wants ... Everyone's like, "Wait, we'll get to it later."
John: Oh, yeah, that's an amazing ... One thing I always thought of is learning in essence is really valuable inventory. And it sits there on the shop floor. It's only when it's kind of converted, or value is extracted out of it, that it actually feels great for everyone involved. And I kind of think that, are you keeping ... I talk about planning inventory, changing inventory, organizational dead inventory, learning inventory, if all that stuff ... What tends to happen is if development is moving 'slow,' you see this wicked loop. They're moving slow, so people start making themselves busy with trying to plan all these things upstream. The organization is stressed because you're moving slow, so they keep asking for what the next thing is because they get stressed. And because it's slow, you also have time to like redo your priorities. So it's always changing.
John: Specifically to that thing, I think what happens with a lot of researchers that I've spoken to is, they're so used to their stuff not really providing value that they actually kind of regress back into what they can control, which is, "My research report was amazing. That's what I love to do, I love ... " Same thing with designers and developers. If someone's like ... if you're so used to the 'MVP' being shipped and your sense of craft being assaulted, you're going to regress ... regress isn't the word, but you're going to retract back into the area of craft you can control. Like, "This design is going to be amazing," or, "This framework that we're going to use for technology is going to be amazing."
John: So I definitely believe that you also get this weird thing where, in lieu of that sense of impact, if you can't get that sense of impact, people will retract into what they can control, and you get a real kind of retrenchment into craft. I see that a lot in the design community. It's hard when I see it, because I know deep in their heart, they really want to see the impact of this stuff.
Erin: Yeah. So you talk about these hardcore user researchers, trained in the discipline, and how they might want to plan research more proactively, ahead of time, and have the big picture. And at the same time, there's this need in many organizations to maybe speed up and be more just-in-time with how the research is getting done. Do you find that it's the trained researchers that have a harder time-
John: Oh, yeah.
Erin: ... doing fast research versus a product manager or UX designer or-
John: Totally.
Erin: ... or somebody else?
John: I think that ... I was talking to someone the other day, and he said, "Well, we've got three PhD researchers on our team and that's actually a liability." But one thing is that I think that as long as the team is very open and transparent about the risks and what they need ... Going back to this thing of involving the team. If you're stuck in the forest, and you don't know where you are, and you've got someone who's really good at climbing mountains, and you're kind of like, "Hey, dude, it would really help if you climbed that mountain to take a look at what's going on. We're going to explore the river because we're really good at fishing and setting up camp, but could you climb the mountain?" That's completely fine, if a team does that.
John: The problem is that they need to integrate. "Hey, go up to the top of the mountain, and then try to come back down in two or three days to let us know where you're at." I don't think there's anything inherently wrong even with the big batch kind of things, and I think it's more like just synchronizing it with the team and making sure people understand the bets you're making around that research.
Erin: Right. Because to the point you were saying before, 'if we don't ship anything that's fine, as long as we're on the right road.' If the point is to be on the right road or to get on the right road, and user insight is a way to do that, how do you get the right backlog of learning questions and prioritize them right if you aren't on the right road yet? Because it does balance long-term and short-term research inevitably to get on that right road. How do your hardcore researchers and your UX and PM folks work together to make that happen?
John: I use a device that's ... maybe someone would say it's kind of negative. But I just try to say, what do we want to make sure happens and what do we want to prevent happening? This is an activity I do with teams. And it's just basically, "What are we trying to optimize for here? We're designing our process, and what do we want to make sure happens, and what do we want to make sure doesn't happen, and what is the inherent tension in this problem?"
John: And actually, I find designers are pretty good at that, and people are pretty good at it. I really love the idea of people co-creating how they work, and I'm really passionate about that. I see that when you frame it that way, as an experiment in how you're working, it opens things up. But I think how this relates to the more experienced designers is that when you just frame the question of, "What don't we want to see happen?" maybe the experienced designer will say, "Well, I actually don't mind if I do three months' worth of work and none of it gets used. We took a bet on it, and I actually have enough bandwidth now, and it's okay. We do have a whole research team, and some of our research is more just-in-time and other is sort of prospective, big-batch kind of research, and we need both if we're going to innovate."
John: The developer will be like, "Well, that's good, but we need you to be available, because we want to be able to talk to you." I'm like, "Oh, okay, well, we need to come up with something where I'm available, and I'm able to do these things." "Okay. All right. That's my balance."
John: The next person will say, "We don't want to keep chasing the local maximum. There are insights with our customers that someone needs to spend a lot of deep time with them to figure out, even if we don't use them." "Oh, great, okay."
John: I think that the main thing is when you talk about it like that, you become much more transparent, as opposed to what I see on teams, which is a very dogmatic battle. It's like, "You guys don't care about research at all, and we just do crap." And then the other team's like, "Well, you don't care about us, and you're not available, and you're in meetings all day, and you're not sharing anything that you're learning right now. We have no idea what you're working on." It's all very contentious, and I think if you just back away, there's no silver bullet, and you have to back up as a team and say, "Well, what are our working agreements right now?"
Erin: I love that. I think-
John: I think it's hard.
Erin: ... It causes so much grief, right? Maybe we aren't on the same page, so let's start with that and understanding why we aren't on the same page. But just, what does happiness look like for this person, or success or fulfillment, and what does it not look like? As opposed to this person hates ... "The researcher doesn't respect my work," or "They're stuck on waterfall," or whatever assumptions might go into something that actually is not the case at all, but rather, we just need to get on the same page about what success is going to look like for us, hopefully.
John: I see this in the design community and in some of the design ops community and I don't know how much it ... I haven't really followed research ops, so I don't really know too much about it, although-
Erin: It's a thing.
John: It's a thing.
Erin: I know.
John: I know, I saw the diagrams, it's totally a thing. Someone asked me internally, what is product ops? I'm like, "Oh geez." I feel that in some of these communities, there's like a defensiveness. I think it actually isn't a good look. It comes off as being very antagonistic, because certain people are working in organizations that are very antagonistic, so it's reactionary instead of visionary. And I don't like that.
JH: Yeah. I have a real pet peeve when people start doing like Venn diagrams of product versus UX and carving out specific tasks that one does and the other doesn't, because it feels very territorial, versus: you have different skills, I have different skills, why don't we just work together? Our situation will be different than some other team's situation. We don't need this gold stamp coming down from on high of, UX always does this and product always does this. It feels like it sets the wrong tone.
Erin: Of course, but there's a reason for it, though, right? I totally agree, but at the same time, who's going to do this kind of stuff? Or, who's going to do it today or tomorrow or next week? So that everything's getting done, and we aren't duplicating efforts. I think that's a good intention.
John: I think that's where the skills inventory comes in. I think that we belittle expertise in this area. There's so much cross-functional work and people ... There's all this sort of T-shaped person thing and I get it. But I think that if you actually say ... A great example, I was at Zendesk. We had a problem that involved ... you needed someone who really knew a lot about just mobile UX, you needed someone who actually knew a lot about search usability, you needed somebody who knew a lot about the domain space for what's being searched. You needed people who understood, who are experts in, what you need to do on the back end to make that possible. You needed people who are experts in the agent actor, and the goals that they needed. You needed someone who is a real expert at workflows related to support.
John: The problems we're tackling in our companies are ones where you need lots of skills. I think that you also need the expertise of sharing that and spreading it and building support for it. So that's an expertise you need as well. I am an advocate for T-shaped people in these environments because of the nature of the problem. If you get too many experts siloed away, that isn't that great either. But I don't think we talk enough about the areas, and expertise, and the hats that we need to wear to be good. A way to have a healthy discussion about that is good.
John: The other thing is, it's one thing to tell someone that you're an expert at something, and it's another thing to show them you're an expert at it. So when I was doing the UX research thing, and I facilitated a design studio, it was the best thing in the world when the designer would say, "Oh, my God, these people are so smart. They're actually pretty good at thinking about design. They brought so many interesting technical variations that I hadn't thought of." And then the developer's saying, "Yeah, that's why that person is doing that." I'm always putting my dots on Lanka's drawing, all the time. I think that's very, very different than having like a battle-of-the-silos situation, showing versus telling.
JH: I agree with that. I agree that there is very different and unique expertise that's 100% needed, and the showing versus telling goes a long way. That all makes sense. Just to circle way back to the earlier part of our conversation. We talked about the kind of consequence of doing research that doesn't get used, and it going stale, or people getting frustrated about that, or whatever.
JH: And then you got into, there are different types of research, obviously, right? There's this big picture, trying to find a new optimal point at a broader level, which is almost like roadmap research, really like bigger picture stuff. And then there's the more like, usability kind of in the weeds like, "We're working on this thing, and we need feedback to make it better."
JH: I think it's probably important to separate those two out, because if you're doing the big-picture type stuff and the discovery, you might learn stuff that you 'don't use,' but you're using it in the sense that you're deciding not to prioritize something else instead, which is hugely valuable and really important, and so it is getting used. Whereas if you're doing more granular usability testing on the screen two months upstream, and then by the time the developer touches it the screen is way different, that doesn't get used, and that is waste and loss. So there's probably some nuance there, right?
John: One tool I like to use is, what information would the team buy if they controlled the budget, to frame the effort? And the big-picture things of like, some company was trying to prioritize some stuff on the backlog. And I was like, "Would you spend a million dollars to get a piece of information now?" And they're like, "You know what? We would, because there's 15 things on the backlog that all relate to this and this other thing is here, and we don't know whether blank blank blank is the right thing." And this was ironic, because they couldn't get enough budget to hire more researchers. I was like, "Seriously, if Forrester could give you the $1 million report, you would buy it?" They're like, "Yeah." I was experiencing complete cognitive dissonance: "Oh, that's super fascinating."
John: But back to that big-picture thing, this relates to having crystal clear conversations about uncertainty and what you want to learn up and down sort of the tree of these efforts. And I think that is where what some might regard as this kind of upfront or upstream research very well may be the exact right investment to make to learn that particular thing, provided that learning actually drives that decision.
John: The decision ... I always say that product development is about decision quality, and decision velocity. And so information, sense making, flow, focus, support, all those things, encourage high quality decisions, and fairly fast decisions, or appropriately fast decisions.
John: So if you can kind of visualize that, I think it means that you are in a world where you don't even need to call it like upfront research, it's just decision support. And I think decision support can be very valuable.
Erin: For sure. And it seems like the challenge with a lot of that is getting that longer time horizon, upstream insight into the right hands at the right time, because you do lose some of that when you're doing this just-in-time stuff. The nature of it is that the people who need to know are going to know, and they're going to know fast, and they're going to do something with it. And that's obviously a completely different episode. But I think that's a lot of the challenge with that kind of research. It exists maybe somewhere, or someone in some department did it, but where is it and how do I nuggetize it or access it and does it get through?
John: Nuggetize, that's awesome.
Erin: Yeah.
John: That can be a button in like a product, like nuggetize.
Erin: Nugget.
John: One thing that you bring up that I think is really important here is that the framing of the mission has a lot to do with this. What I mean by that is, if a team tackles a mission of like three to six months, a real mission, a meaty problem, an opportunity that they're going to extract value out of, then a lot of the design and coordination challenges, or how far up front you get, and all these things, actually starts to make sense because you're kind of viewing work at a resolution at that level, if that makes sense.
John: I think that where you see problems is when you go from that level of work, but all the teams understand are very prescriptive levels below that. They think in terms of screens, they think in terms of this one little experience. So an example would be, at a company, the mission would be: make account reconciliation blazing fast for enterprise customers. That's a great mission. And you could fill in the why there, but the why would be error-free reconciliation, or leave them time for strategic thinking, whatever mission level you want.
John: The nice thing about that is it gives the researcher just the right kind of scope to tackle a meaty problem, and the team the scope to tackle an immediate problem, and then the need for the supporting research to kick that off. That's the type of mission that you might spend two weeks, just hands off of keyboards, as a team, tackling together. The minute that you're playing Tetris with teams with two-week, four-week, one-week, that level of work, no one has time to breathe. I don't know if that made sense. But basically, when you're framing the work at the right resolution, it opens up wonderful opportunities for people to approach these things in a healthy way.
JH: I think the same thing happens on the micro scale, in the sense of, you just need to set the right constraints. When you're trying to break down a problem into screens and you take this part, and you take this part, you're not trying to dictate the solution of how the code is going to be written and implemented, you're just trying to give people a focus. And in that case, it's pretty narrow because you're trying to help get it done in a few days, and it's pretty specific.
JH: But what you just described on the mission level is like, you're just setting constraints at a much higher elevation, but you're still giving the team room to go play and be creative within those constraints. I think there are some similar tactics there, just at a different scale.
John: One thing I really preach about is just coherence. I'll do exercises with teams, and they'll be like, "Yeah, there's just two levels of work." They're kind of like the epic and then stories, whatever technique they're using. And I'm like, "Are there really just that many levels?" And then I'll just talk about the company as a whole. I use this method called ones and threes, which is, I ask the team to just map 1-3 hour bets all the way up to 1-3 decade bets for the company. 1-3 hours, 1-3 days, 1-3 weeks, 1-3 months, 1-3 quarters, 1-3 years, 1-3 decades.
John: And what's really fascinating is that there's this messy middle of work decomposition. And this is what drives ... When I was doing UX research, this is what drove me crazy, because it was very ill-defined. There was high-level stuff that the company believed in, and then there was the low-level tactical stuff, and I couldn't really understand what this middle layer of work was.
John: I think that when you're doing this correctly ... There's an old thing about simplicity being great, but we want to use the appropriate sort of level of simplicity, and I think opting for more coherent work decomposition ... actually, if you can have that roadmap there as a researcher, you can pick your level more appropriately. And to your point, again, it's kind of all pretty fractal. It just helps you visualize where you need to do your work. And that's kind of more a product thing, but I think that it's something that as a researcher you can urge your team to move in that direction. Like, "Hey, it looks like we're oversimplifying the breakdown of this work. Can you be more clear about this messy middle of the mission?"
Erin: To the messy middle?
John: To the messy middle? I don't know. Is that what we want to sell on the podcast? Yeah. Okay. We'll go there. One thing I learned as a UX researcher was, you occupy this really odd place. You talk to lots of people, you're kind of in the middle of the organization, but sometimes you don't really feel like you're on a team. And it's like you're a canary in the coal mine, because you kind of see the machinations of the business, the politics of the business, the bureaucracy, and you're expected to kind of navigate it, and if there's a lack of vision, you feel it first. I think that's the same way for product management, but I think it's especially hard when you're a researcher, because if there's a problem, you'll see it, you'll know about it.
Erin: You're a good person to ask because you've both been in a dedicated research role and not been in one, so that's good. What do you love about user research?
John: When I did UX research, what I really, really loved was taking the team along on the discovery adventure. I know there's variations of the role, but what I got into, at the company that I did this at, that was my role. I talk to a lot of UX researchers, so I know that other versions of the hat exist, but that really was great. I love that. Because I loved being there as kind of like a guide and having great tips and tools and techniques available for the team, and then watching them learn, was exciting. There's other researchers who have very different things, that might turn them off, but that's what got me excited to come to work every day.
JH: Nice. I don't know how quick this will be, but I'll try to throw it in.
JH: The thing I think about a lot with learning in general is the asymmetry of feedback you get. Because you set out, and you're like, "Hey, there's this thing we want to learn more about." And you probably have some bias that you hope it's true or whatever. And so when you go out, and you learn that it's true, it's kind of like high fives and pats on the back and, "Cool, let's go do it." And that's great. But you don't ever really look back at it and critically evaluate, could we have learned it faster? Could we have done something that was lighter in terms of getting the same data? Whereas if you go out, and you learn that the idea that you were interested in is a bad one, there is some retrospective in digging into why that was the case. But it still is uncomfortable to talk about the fact that it wasn't a good idea. And so you don't learn how to learn, I guess is what I'm trying to say. Because there's not a great feedback loop there. Have you seen people find ways to get over that?
John: Well, I think there's ... There's this great book 'Thinking in Bets,' by Annie Duke, which is interesting. She's a poker player and talks about kind of decision quality and forming a group of people to inspect your decisions using a number of heuristics, like what did we know at the time? I don't see this a lot, but I wish I did. This idea of, "Hey, let's review the last 10 decisions we made around research." I see retrospectives as something that happens on the team level, and frankly, a lot of people don't even do anything about what they retrospect on, and they just kvetch, and they just sit there, and it just is what it is. They're just going through the motions.
John: I think that idea of looking at decision quality regardless of the outcome. Like, "Can we go and look at our decision to do this? Did we match the appropriate methods to the question? This is how I did this analysis." "Oh, interesting, that can be improved." I wish I saw that more. I don't think people do that enough. I think there's so much pressure on the day to day. I've had senior people say, "Why would we ever do that? That's just looking back, that's just dragging up old laundry." I think that in a functioning research practice, like Lori here in Santa Barbara who I was talking to you about, they've kind of gotten to that level where they do take that seriously. Like, "We're going to critique how we went about doing this and see if we could improve it."
JH: Yeah, totally. I think that makes sense. I think you see it in athletics. If you run a personal best or something, but you realize after the fact that your first split was your fastest, and your last split was your slowest, you're going to do some analysis and be like, "Oh, if I had evenly paced that, I probably could have gone even faster." You know what I mean? I think there are a lot of good examples elsewhere that we can continue to kind of lean on and draw from.
John: Yeah, I'm thinking of 'Checklist Manifesto,' that's an interesting book. The problem with this kind of retrospective is I think people feel it's so heavy and so painful that they don't do it. And remember ... I was thinking, like, if it hurts, do it more often. But the question would be, what are six checklist questions you can just go through? Could you even start there in terms of talking about your prior decisions? And I think that kind of opens up some options if you do it at that level. It does not need to be a big heavyweight thing when you do it.
JH: Cool. Well, thanks for joining us. This is fun.
Erin: Thanks for listening to Awkward Silences brought to you by User Interviews.
JH: Theme Music by Fragile Gang.
Erin: Editing and sound production by Carrie Boyd.

Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews
John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)
John Cutler
Guest
John is a product development nut who loves wrangling complex problems and answering the why with qual/quant data. Currently the Senior Director, Product Enablement at Toast, he previously led product education at Amplitude.