#115 - Optimizing In-Product Research with Ryan Glasgow of Sprig
E115


Erin - 00:00:36: Hello everyone, and welcome back to Awkward Silences. Today we're thrilled to have Ryan Glasgow here, founder and CEO of Sprig. Sprig are friends of User Interviews, and we always love having our friends on the podcast. So, Ryan, thanks so much for joining us today.

Ryan - 00:00:52: Thanks for having me.

Erin - 00:00:54: Yeah, we're going to be talking about getting started with in-product research, which I know is something a lot of people are thinking about right now — PMs, product people, researchers, whoever — so lots to dig into. Got JH here, too.

JH - 00:01:06: Yeah, we've obviously recorded a lot of episodes now. Somewhat surprised we haven't gotten into this kind of topic before. So I think it'll be a good one to explore.

Erin - 00:01:12: Yeah, it's funny, when we started, we were like, how many episodes can we get through on UX research? And we just keep going and finding new topics. So that's been a lot of fun. Okay, so in product research, let's start from the beginning. Researchers are always focused on the why question. So why in-product research, why is it a good thing? Why does it matter? What are the benefits?

Ryan - 00:01:31: Yeah, a great starting point for today's discussion. My background is in product management. When I was at Weebly as the first product manager, we were quickly growing to 50 million accounts, and we built out the product management team and the research team. What I noticed is that our research team, and also the product management team, was spending a lot of time on pre-launch research — generative research, getting mockups and designs in front of customers — and we were getting really great insights there. But when we would launch something, we often didn't have the mechanisms, tooling, or techniques to really understand what happens post-launch. When you ship something out in the wild to all of your customers, it's radio silence. You don't get that feedback loop. You're not learning. You don't know how it was really perceived by those users in the wild when they're using the product. That's the area where I felt the market was mostly unsatisfied — it was really an opportunity. So when I started Sprig — at the time we were called UserLeap — we were focused on that specific problem: giving anyone who's doing research — product managers, researchers, designers — the tooling and capabilities to learn from their customers after they've launched something. In-product research is really about meeting your users where they're at and hearing from them in a more contextual manner than, say, a traditional email survey. We're all used to getting those dreaded email surveys — maybe a Red Lobster gift card for $25 to fill out a 50-question email survey — but it's not attached to the experience of someone actually using the product. In-product research meets your users where they're at, as they're signing up for the product and completing an onboarding flow.
You want to understand those key moments — maybe they use User Interviews for the first time and have their first conversation with a participant. Building that bridge, connecting that gap between the people building the products and the people using the products, is really what in-product research is about.

JH - 00:03:44: How would you define in-product research? You're describing some intercept stuff here — is that the crux of it? Do things like A/B testing count? Does session replay count? How do you draw the boundaries around the concept?

Ryan - 00:03:57: When we look at research, I define it as user sentiment: thoughts, feelings, emotions, and how someone perceives a particular product. You can't really get that with A/B testing, and you can't really get it by watching someone use your product. Just because someone rage-clicks doesn't mean they're necessarily unhappy with your product. But if you ask them, are you unhappy with the product? and they say yes — to me, that's research. So in-product research, as we perceive and define it at Sprig, is focused on measuring and understanding, both quantitatively and qualitatively, how users are thinking about, feeling about, and perceiving the products you're building.

Erin - 00:04:39: So we're talking about in-product research — it's in the name, the research is happening in the product. And a lot of times, things happening in the product can get a little territorial, or just difficult to make happen, right? We're in the product; we need engineers, and engineering time is scarce. Who is in-product research for, and how do these different stakeholders who might want to do it go about making it actually happen?

Ryan - 00:05:04: We really base it on company size. Some customers that are smaller businesses, maybe quickly growing, don't have researchers yet — it's going to be the product managers and designers. But as these companies grow, that's where the researchers step in, and they bring Sprig into their research programs as a way to complement the other types of research they're running. Google and Facebook have been doing intercepts for ten-plus years — they're probably the pioneers in the field, and they built their own homegrown tools. At Sprig, we're focused on bringing in-product research to everyone else — companies that don't already have these capabilities built into their infrastructure. What we've been able to do is make it really simple: after embedding a small piece of code into your product, anyone non-technical can define, configure, and set up in-product research with no engineering effort. With just a couple of clicks — usually five or ten minutes — they can get one of these running live in their product, and usually within one or two days complete the experiment and get the research insights that answer the questions they have.

Erin - 00:06:22: A quick tangent — I'm sure we'll get into more detail later. You talk about in-context intercepts, and immediately my mind goes to something I think a lot of marketers think about. The benefit here is clear, right? It's in context; you don't have to take people out of what they're doing to get the insight. But how do you do it in a way that doesn't annoy them or get in their way — in the way of other goals you have, which are presumably for them to use the product, to do something else?

Ryan - 00:06:49: Yeah, we get that question all the time, and it's something we've thought deeply about. The two main approaches we've taken: all of our templates — we have 75 of them — are designed around, and we encourage all of our customers to run, in-product research at the end of a flow or when someone abandons a flow. So if you're in the middle of a sign-up flow, in the middle of onboarding, watching a video, listening to a podcast, in the middle of your Uber ride — we really discourage sending intercepts in the moment. But when someone completes that podcast, or completes that user interview, or completes that onboarding, that's a great stopping point to pause and check in and say, how was that experience? Conversely, if someone abandons the onboarding flow, or clicks out of the podcast, or exits the application, that's also a great opportunity to hear from them and say: you've abandoned the flow — help us understand what happened and why. Is it something with the product, or was there something outside of the product's control that prompted it? So all the research we recommend with our customers here at Sprig is non-intrusive, delivered at moments where there's a good stopping point to check in with customers and see how they feel about what just happened.

JH - 00:08:15: Yeah, it feels almost like when you're going to interrupt somebody in real life — there are polite moments to do that, a bit of an etiquette to it. You can almost draw from that kind of framework. In terms of when you do interrupt and throw an intercept, are there best practices around how many questions you should be asking, how long of an interruption it is? And I'd imagine there's a qual-versus-quant element here, where you can get some structured responses but also some open-ended stuff. How do you see people taking advantage of those different capabilities, and what do you recommend there?

Ryan - 00:08:47: It's been helpful, looking across our entire platform at the number of responses and surveys we collect, to do a lot of analysis of best practices. What we have seen is about a 20% drop-off for each question that you add. So all of our templates are typically two or three questions for the end user. With skip logic, the template overall might be five or six questions, but for the end user it's usually two or three. And what we've seen get the highest response rate is to always start with a quantitative, closed question: how was the onboarding experience, on a one-to-five scale? It's a way to open up the conversation and let someone answer an easier question — they click a button and just rate the experience. The second question might then go into the qualitative: why was that a great experience? Why did you abandon that flow? Why was that a negative experience? We found that when you start with the open-ended question, you actually get a much lower response rate — someone has to stop and think: what do I say? How did I actually perceive that experience? But you can start in a more subtle way with the quantitative question and then follow up with the open-ended one. From there, you can decide whether to go back to open or closed questions, or maybe ask them to reach out with more questions, set up some time one-on-one, and really start that conversation.
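[Editor's note: to make the ~20% per-question drop-off concrete, here is a back-of-the-envelope sketch. The 20% figure is from the episode; the 30% starting response rate is a hypothetical number chosen for illustration, not something Ryan states.]

```python
def expected_completions(shown, questions, start_rate=0.30, drop_per_question=0.20):
    """Rough model: of the users shown a survey, `start_rate` answer question 1,
    and each additional question loses `drop_per_question` of those remaining."""
    finishers = shown * start_rate
    for _ in range(questions - 1):  # drop-off applies to each question after the first
        finishers *= (1 - drop_per_question)
    return round(finishers)

# Showing a survey to 10,000 users:
print(expected_completions(10_000, 2))  # 2400 finish a 2-question survey
print(expected_completions(10_000, 6))  # 983 finish a 6-question survey
```

Under these assumptions, a six-question survey completes at well under half the rate of a two-question one, which is why the templates cap the end-user experience at two or three questions.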

Erin - 00:10:12: Sort of the format you see with an NPS survey, right? Pick a number, now tell me more — so if they drop off at the qual question, at least you got some numerical information you can use later. Great. So there's so much you can do with in-product research, given the benefits you've talked about. For folks getting started, how do they avoid boiling the ocean and start with something useful and easy to get off the ground first? What's your recommendation there?

Ryan - 00:10:43: What we always recommend, and have seen work best, is to just pick a question — a product question, a business question, a research question — that your immediate team or specific counterparts are facing: that product manager and their squad of engineers, or the specific question a researcher is looking to address. Something immediate, with some urgency over the next four to six weeks. The difference with in-product research is that you can collect the data and complete the study in a one-to-two-day period if you have enough users — usually at most a week. So you can look at questions you or your peers are facing in the immediate term, as opposed to some traditional techniques like a diary study, where you might need to look at a challenge that's many months into the future. I really recommend something that's right in front of the team's nose — take that question or business challenge and run an in-product survey. A great way to start, for inspiration: we have over 75 templates written by our in-house research team using all the best practices of in-product research. If you look through those, you'll see things around usability, product quality, onboarding, sign-up flows, marketing websites — and you can just pick one. We actually ran one on our website the other day: I just picked a template, clicked run, and it was live. We were collecting data in a couple of minutes. It's a really great way to get started. But ultimately, like I said earlier, it's about looking at the successful path — the people who do use that feature successfully, who do add that product to their store —
but also the people who are unsuccessful: they start adding a product to their store but cancel or close out. Running in-product surveys for both of those flows lets you hear, in both cases, what worked well on the successful, happy path, and also what didn't work for the people who clicked out and abandoned. One example is Chipper Cash, one of the largest fintech companies in Africa. They launched a way to buy and trade crypto, and they noticed very low adoption. They were considering using in-product research, and we said, hey, let's just ask people: are you aware that this feature exists? Do you know what crypto is? What they found out is that many people simply didn't know the feature existed, and by addressing that they were able to increase adoption by nearly 200%. It was really an awareness challenge: figure out how to improve the product marketing visibility and awareness of the feature so people know it exists, and that increases adoption. Another great example of an immediate, urgent question that in-product research was able to answer and address in just a few days.

Erin - 00:13:39: It sounds like in that case, the research was actually driving an optimization in itself, right? You both learned what the issue was and maybe created some awareness just by asking the question. It doesn't always work that way, but it's nice when it does.

Ryan - 00:13:52: Exactly.

JH - 00:13:53: Yeah, I like the emphasis on the speed component — what's a near-term decision you're wrestling with — because this is a methodology that lets you get signal really quickly. So beyond just finding a use case, you get the immediate loop of seeing the value right off the jump, which is great. The question I have next: I'd imagine people getting started with this are familiar with other methodologies and the questions that come up there — how many people do I need to talk to? How do I screen them and make sure I'm talking to the right folks? How do you think about those concepts with in-product research? Do you want to target certain segments of users? Do you want as many responses as you can get, or do you want to limit it so you don't interrupt as many people? What advice do you have for people getting started?

Ryan - 00:14:35: Certainly. It's about moving beyond some of the less targeted in-product research most of us are familiar with, like NPS — it shows up for everyone, globally. Next time you log into a product, you might see that NPS banner asking you to rate it on a zero-to-ten scale. What we recommend, and where I think the world is moving, is hyper-targeted in-product research. It's critical to have targeting capabilities that let you say: I only want to target admins on our enterprise plan who have used our product for more than three months and have launched a study more than five times. Look at a specific group and get very targeted about understanding the nuances of what that group is thinking, or how they're perceiving a particular moment of using the product. One of the ways we think about distributing responses across the user base is a resurvey window. It can be set in the platform to seven days, 30 days, 60 days — some of our customers do 180 days, six months. This allows you to evenly distribute survey collection across the user base, so that even if you get really targeted on a specific group and keep going back to it, you don't end up over-surveying a particular audience. It is a little different from email surveys, where you might send to 5,000 or 10,000 people and take however many responses you get. With in-product research, the survey can run until you reach exactly statistical significance. The system can calculate that — of the admin users on the enterprise plan who have launched more than five studies in the past 30 days — there are 2,500 users in that cohort on a 30-day rolling period,
and that to reach a 95% confidence level we're going to need 385 responses. The survey can then run until it reaches exactly 385 responses and be marked as complete. You can look at those results and feel really confident in them, because getting the exact number of responses you're looking for is all built into the platform.
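[Editor's note: the episode doesn't specify Sprig's exact calculation, but the 385 figure matches the standard Cochran sample-size formula for a proportion at a 95% confidence level and a ±5% margin of error. A sketch of that formula, including the optional finite-population correction that applies when the cohort size (here, 2,500) is known:]

```python
import math

def required_sample_size(z=1.96, margin=0.05, p=0.5, population=None):
    """Cochran's sample-size formula for estimating a proportion.
    z: z-score for the confidence level (1.96 for 95%)
    margin: desired margin of error (0.05 for +/-5%)
    p: assumed proportion (0.5 is the conservative worst case)
    population: if given, apply the finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # ~384.16 for the defaults
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(required_sample_size())                 # 385: the familiar 95% / +/-5% figure
print(required_sample_size(population=2500))  # 334: fewer needed for a known 2,500-user cohort
```

Note that with the finite-population correction, a 2,500-user cohort actually needs slightly fewer than 385 responses; the 385 in the conversation is the uncorrected (infinite-population) figure.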

JH - 00:17:01: Alright, quick awkward interruption here. It's fun to talk about user research, but you know what's really fun is doing user research and we want to help you with that.

Erin - 00:17:09: We want to help you so much that we have created a special place, it's called userinterviews.com/awkward for you to get your first three participants free.

JH - 00:17:20: We all know we should be talking to users more, so we went ahead and removed as many barriers as possible. It's going to be easy, it's going to be quick, you're going to love it. So get out there and check it out.

Erin - 00:17:29: And then when you're done with that, go on over to your favorite podcasting app and leave us a review, please.

Erin - 00:17:37: We've talked about using in-product surveys across different journeys, different lifecycle stages — onboarding, specific features, different users. Do you ever use it for more generative research? Open-ended questions, like getting a sense of what we should build next, to get some ideas going? Or is it really most useful for, as you mentioned earlier, what happens post-launch and how to drive adoption? Are there other ways to use it that are a little more generative?

Ryan - 00:18:13: There are some use cases we're starting to see, and we now have a category of templates around recruiting. Recruiting from your current customers is a growing use case. If there's a specific group you want to reach out to, you can deliver an in-product survey — you've already set all the targeting in place, gotten it down to a specific group — asking, hey, we want to talk to you for 20 or 30 minutes about a specific topic, or show you something we're working on, or run an idea by you. At the same time, a lot of our customers, ourselves included, also want to use other means: talk to prospective users, do market research, talk to people who don't use their product. That's where we're excited about the User Interviews integration with all of you — thinking about how you can use concept testing with Sprig to test mockups and ideas for what you're going to build next, but also reach out to people who might not even use your product today, source those people, and set up those conversations. So it is more for evaluative research, but there is that small recurring recruiting use case, which to me facilitates some of the conversations that happen in generative research.

JH - 00:19:28: Nice. I was just going through our notes. I know you had an example in here about how you can do some more sophisticated things with this, where you can start to kind of pair the in-product research with something like an A/B test, right, and get some qualitative signal on those results. Could you maybe share how you think about that and what some of the utility is there for the more advanced people in this world?

Ryan - 00:19:47: The fairly niche use case that's starting to emerge: any company at scale, or quickly growing, is going to be feature flagging or A/B testing the changes they make to their product. They need to know whether it's having a positive impact on business conversion metrics, or potentially a negative one. What we've seen researchers get really excited about, and adopt in-product research for, is integrating in-product surveys into the A/B test. We've all had the experience of going to cancel a service and not being able to find the cancellation button, or having to call in during a two-hour window — someone makes product changes in a way that's adverse to your experience. And they might be running A/B tests; the A/B test is really the north star for a lot of product teams today. But what we see missing — what I think many of us, as end users of products, really yearn for — is for these products to actually be curious about how we perceive the change. Are we happy that the cancellation button is missing? Are we happy with that new redesign, even if it's improving the product's conversion or helping grow revenue for the business? Because ultimately, we're the ones using the products. They're fitting into our lives, and we're often the ones paying for them. So we've seen a lot of researchers start to integrate in-product surveys into each A/B test they're running. An A/B test might be, say, a new redesign being rolled out, where they're looking to understand: is this redesign going to improve the experience for end users? They might go into Sprig, run a usability survey, and see: are users perceiving this new redesign as easier to use than the previous version? They're then able to combine the research data — the user sentiment — with the business conversion data.
A lot of researchers are now bringing that data to their teams and saying, hey, we ran this A/B test. It might have a positive business impact, but our users are actually less satisfied with the change. Let's dig in a little more before we roll this out, because if that user sentiment is negative, it will have a long-term detrimental impact on business growth — even though the conversion data might show a short-term upside, it might ultimately drive users away over longer periods of time. It's a more advanced use case that I didn't expect, and I think it's getting really interesting in the world of in-product surveys: combining sentiment data with conversion data ultimately helps deliver an improved user experience that users prefer, as well as improving business impact.

JH - 00:22:45: Yeah, there are going to be so many interesting things there. Anyone who's done a lot of A/B tests has had an example where you're really excited about the new variant, and then you see the data and it's kind of flat — no clear result. But if you have that qual signal, there are times where you see a really positive response to the new version, but there's one thing missing, something you overlooked. Instead of calling that test a loss and rolling out the control, you can probably just iterate on it and find a significant win. So it seems like a really cool combination.

Erin - 00:23:12: Great. So we've talked a little bit about in-product surveys and research, and how you can layer that together with A/B testing and other methods for more complex use cases. What about when not to use this? When is it the wrong tool for the question — a hammer looking for a nail? When should you think about other methods?

Ryan - 00:23:36: Certainly when you have those bigger questions that require deeper discussion, more fidelity. Nothing beats face-to-face — talking to customers in person, actively watching them use your product and asking questions along the way — when there are larger challenges to be solved, larger strategic questions: should we enter a new market? Should we launch a new product? That's really the power of generative research, and it could be a diary study, or setting up five or ten moderated discussions with those customers. In-product research is certainly a complement to the data sets teams are getting today, but by no means a replacement for a lot of the traditional techniques. Going back to the founding story, I saw really great sets of tools for moderated research, for recruiting, for connecting with prospective customers through panel providers such as User Interviews. So we really focused on that post-launch phase with Sprig's core product, in-product surveys. For the other techniques, there are a lot of other great tools, and it's certainly not an area where we're looking to say we're an all-in-one research platform — it's really just one pillar of research. We did get a lot of interest from our customers about moving into evaluative research for their next big idea: I'm working on a new design, I'm looking at a new concept test, and I want to hear from end users — prospective users, maybe from a panel — about how people perceive this new design before we actually start development and start writing the code. That prompted us to launch a concept testing and usability testing solution in Sprig, which we launched in August.
It's an opportunity for our customers to not only say, we launched something, let's see what users think, but also, we're going to launch something, let's see what our customers think about what we plan to do, and really narrow it down. We've looked at the research journey and broken it down into four steps, from idea to having something in the hands of your customers. First is concept testing: what is the right way to solve this problem? Let's put all the different solutions on the table. Then usability testing: we've narrowed it down to a single solution, and we want to test different user experiences, different designs, different approaches to executing it. Then in-product surveys: integrating into the feature flags and A/B testing as you start to roll out — how are users perceiving the change you're making? Is it well received? And lastly, continuous learning: running continuous surveys to measure the existing experience that's already in place — maybe something you're not actively working on, but where you're thinking about what areas to tackle next as an organization. That can start to surface some of the challenges and where you want to focus.

Erin - 00:26:59: That's great, because as you were saying earlier, you can run an in-product survey — say you want to focus on onboarding — and get some insights there. But there's also that you-don't-know-what-you-don't-know problem: where should we focus next? What opportunities are we missing because we aren't paying attention to them? It sounds like some of that continuous research could be really useful there.

Ryan - 00:27:22: Yes, exactly. Getting those continuous insights is what I was looking for as a product manager prior to Sprig, and a lot of researchers we talk to want to deliver more continuous insights to their counterparts. That's where in-product research is powerful. But sometimes you just want to go deeper and have those conversations — and again, nothing beats the live conversations.

Erin - 00:27:47: We're hearing a lot about Teresa Torres and her work around continuous research. The concept has been around for a while, but there's a lot of buzz about it right now, and there seems to be growing interest and momentum around continuous research. Are you seeing that? Any theories on why that might be?

Ryan - 00:28:05: Absolutely — we've seen that on our end as well. Think about how, over the last fifteen, ten, five years, we shifted to continuous development and continuous deployment. I remember going to Fry's Electronics as a kid and buying software on a CD or DVD, coming home and installing it on the computer, and then the next year deciding: do I want the new version, or do I skip it? Do I want to buy the upgrade? When you think about where software is today, changes are happening to products every day, every hour. But the learning loop, the research loop, is often six weeks, eight weeks, twelve weeks. The more we can compress that loop — you make the change, you learn from the change, you make the change, you learn from the change — the more you'll ultimately get compounding improvements to your product. I think Teresa Torres is really a pioneer in encouraging continuous discovery and continuous learning — how we can all learn from our customers and users, ideally as fast as we're changing our products. The techniques she's been talking about are really interesting, and certainly something we're excited about and have been promoting as well.

JH - 00:29:27: Nice. Yeah, it's been a cool trend. I have a random use case that came to mind — I'm curious if people do this as well. We've been talking about in-product; what about on the marketing site? Do people do anything there, where someone leaves the pricing page and you throw an intercept and maybe find out something was confusing? That's a little bit more up the funnel, but still contextual.

Ryan - 00:29:49: We do. Shift.com is one of our customers, and they've been running in-product research on their homepage, asking people right up front the specific questions they have. We're currently running in-product surveys on our own homepage — and not just because it's our own product. We genuinely have questions about how to strengthen our messaging and our positioning, and we're actually getting insights. Late last night I checked the data, looked at the results, and had some interesting aha moments about what users — prospective visitors — are telling us right from our homepage. So I certainly see it as a great use case up the funnel: engaging people as they're looking at those pages, looking at pricing, considering whether to sign up. What are their hesitations, their blockers, their concerns? What's important to them? How are they evaluating solutions? That can really unlock some breakthroughs for teams as they think about improving homepage conversion.

JH - 00:30:48: Cool. Yeah. So don't take the "in-product" label too literally, I guess. There are lots of places where you might be able to use this that can get you useful insights as well.

Erin - 00:30:55: Everything's a product, right?

JH- 00:30:56: That's true.

Erin - 00:30:58: Well, so as we're recording, we're wrapping towards the end of the year and when this launches, it will be the beginning of 2023. And so, it's a good time to zoom out and reflect and think about the future. And I'm curious, what are you excited about thinking about in your head? Of course, in tech everyone's kind of thinking about the looming downturn and some things around that so curious if you're thinking about that or more positive trends in research or where your head is when you think about the year ahead and research.

Ryan - 00:31:34: It's more of the latter. It's just exciting seeing what I'd call Research 2.0 emerging: new tools, new capabilities, new techniques. People care about research more than ever, and it's exciting to be part of the industry and look at what's possible. In a competitive environment, it's getting easier and easier to build products and solutions. Every time I go on Product Hunt I see 50, 80, 100 new products a week, and with no-code you're seeing how quickly people can build and bring new products to market. Everyone's getting funded these days; $5 million seed rounds are not difficult to come by. But it ultimately comes down to the battle for the customer, the battle for the user, and that's where research is becoming increasingly critical. I've been really energized by the growth of the category and the growing importance of research. Maybe five or ten years ago people weren't doing research because they were the only product in their category; now they might have five or ten competitors that do exactly what they do. The path to winning, the path to being the category leader, runs through research. So I continue to be very optimistic and bullish on the category, and I think the macroeconomic headwinds will hopefully subside in early 2023, and then we'll all be off to the races and back on track.

Erin - 00:33:08: Either way, you need research, right?

Ryan - 00:33:10: Exactly.

JH - 00:33:11: I'm excited for the User Interviews/Sprig integration, for that matter. Hopefully that'll be out in the wild by the time people hear this.

Erin - 00:33:19: It should be coming, like, this week, right?

JH - 00:33:20: Next week soon. Yeah.

Ryan - 00:33:22: I know. The team is putting the finishing touches on it, and our team loves working with your team, by the way. We have a shared Slack channel and have been working together in there on building that.

JH - 00:33:33: It's really cool.

Ryan - 00:33:34: And we use User Interviews quite a bit. So it's something that we actually wanted ourselves, and our customers are asking for panels all the time. So I'm really excited about that integration and partnering together.

JH - 00:33:47: Listen to this. Check it out.

Erin - 00:33:49: Yeah, check it out. Either of our sites should be promoting it.

Ryan - 00:33:53: Cool.

Erin - 00:33:53: Awesome. Thank you so much, Ryan. It's great to have you on the show.

Ryan - 00:33:56: Thanks for having me.

JH - 00:33:57: Yeah, thanks for the blast.

Erin - 00:34:01: Thanks for listening to Awkward Silences brought to you by User Interviews.

JH - 00:34:06: Theme music by Fragile Gang.

Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews
John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)
Ryan Glasgow
Guest
Ryan is the Founder and CEO at Sprig (formerly UserLeap), a research platform that provides advanced usability testing and in-product survey capabilities to companies such as Dropbox, Loom, and Shift. He is the author of The Customer-Obsessed Product Manager's Playbook and current host of the People Driven Products podcast. Ryan has a strong background in product management. Prior to founding Sprig, he was the Group Product Manager at Weebly, Product Manager at Vurb, and Product Designer at Extrabux.