
#164 - UX Lessons from a Decade Researching AI with Jess Holbrook of Microsoft
Jess Holbrook [00:00:00]:
Just about any researcher should be able to prototype at this point with the tools that are out there. There's lots of good articles, there's lots of good tools. Spend an afternoon trying to figure it out. You can probably prototype quite a bit more than you think you can. But then with all that agility and with that ability to prototype, we can't drift, because then we'll drift off into this space of novelty and "this is really cool." And there's a place for that, for sure. I'm not saying don't invent. Invention is very fun.
Jess Holbrook [00:00:30]:
I think more researchers should think of themselves as inventors than do. I really believe in this duality of discovery and invention. And we're kind of always going back and forth between discovery and invention, but we also need to really stay anchored in those enduring human needs and not drift from them into the space again of, like I mentioned earlier, "my life is awesome and you made it 1% more awesome."
Erin May [00:00:58]:
Hey, this is Erin May.
Carol Guest [00:00:59]:
And this is Carol Guest.
Erin May [00:01:01]:
And this is Awkward Silences. Awkward Silences is brought to you by User Interviews, the fastest way to recruit targeted, high-quality participants for any kind of research. Hello everybody, and welcome back to Awkward Silences. Today we're here with Jess Holbrook. Jess is the head of UX research at Microsoft AI and super qualified to talk about today's topic: what we can learn from traditional UXR about AI research, maybe what we can't, and just provide a lot of perspective over many years of doing AI research. So, Jess, thank you so much for joining us today.
Jess Holbrook [00:01:43]:
Thanks, Erin. Thanks, Ben. Yeah, happy to be here.
Erin May [00:01:45]:
All right, well, let's jump right in. Like many UX researchers, you have a background in social science. Unlike many UX researchers, you have a lot of experience researching for AI applications. Could you tell us a little bit about your journey from sort of psychology and academia to tech and specifically to AI research?
Jess Holbrook [00:02:08]:
Yeah, yeah, happy to. I mean, before I get started too, I always feel like this is one of the values of our discipline, that so many people come from so many different backgrounds. And I've yet to meet, you know, a UX researcher, even somebody in UX, who had a straight line through. I always love to reinforce that that's one of our strengths.
Erin May [00:02:28]:
A hundred percent.
Jess Holbrook [00:02:29]:
Yeah. So I went to school for psychology and social psychology, and I thought I was going to be an academic and all that. And then between year one and two in grad school, I needed to get a job because I had rent and things like that. And so I got an internship at what I think may have been the only tech startup in Eugene, Oregon at the time. And as an intern, I'd go in each day and they'd kind of tell me what my job was, and it would vary from day to day. And one day I go in and they say, hey, we want you to redesign the ads page for our website, and we want you to use Dreamweaver. And depending on the demographics of your audience, some people are nodding and laughing about Dreamweaver, and other people are like, what is Dreamweaver? You know, you can search it. And so I started to just redesign the page, and I essentially started, like, commenting out code, and I had no idea how any of this worked.
Jess Holbrook [00:03:22]:
And I was like, oh, if I get rid of that code, this thing that was in the center of the screen moved to the left of the screen. Oh, if I comment out that code, this image goes away. And I just kind of learned some real basics that way. So I went back to my advisor. At the time, I didn't even have the words for design. I said, I like psychology, I like technology, and I like art. Basically, what should I do? And he said, oh, you should do user research. And I was like, oh, what's that? And he connected me with an HCI professor at the school.
Jess Holbrook [00:03:52]:
This is another thing: I've always tried to invest in mentorship of people, because she cared about me when she didn't need to. She invested in me. She brought me into all of her classes and really built my HCI foundation. So I was very, very fortunate. And then from there, you'd think, you know, I had a PhD in psychology and I had an HCI degree. And I came and I applied to about 50 different places, and I got rejected from 49 of them. I know that the job market's pretty rough out there for a lot of people. I hear you.
Jess Holbrook [00:04:26]:
It can get bad. You'll find something. It's going to work out. So I got a contracting role in the hardware group at Microsoft, and after a few months there, I got a full-time role in the Windows group at Microsoft. So I was there for about five years, and then I went to Amazon for a bit to work on Amazon.com, like, literally the site. And then I moved to Google to work on the cloud platform. Cloud was really taking off. This is about 2013, cloud's taking off.
Jess Holbrook [00:04:55]:
It was kind of the space to be. And an engineering director, a research director, had just started at Google and kind of came and gave this presentation. His name's Blaise Agüera y Arcas. And he kind of said, the deep learning revolution is on. AI is happening. And I said, this sounds really cool, I want to go check that out. So I moved over to his team when it was him, 10 engineers, and me. Now, as the AI wave has taken over, it looks like a good call to switch to AI. At the time, it seemed like a very not good call.
Jess Holbrook [00:05:29]:
Everyone who was like, oh, I'm trying to ride the next wave, was focused on cloud. That was the place to be. That's where all the teams were growing. That's where all the career opportunities were. And so for me to jump ship actually seemed pretty weird at the time. They were like, they don't even have UX over there. Some people were like, what are you doing? But that turned out to be a really great call for me personally, in my own personal interest and growth. And so, how do you get into AI? One of the things that I try to tell people is, I didn't care about AI before I worked over there.
Jess Holbrook [00:06:06]:
You know, I worked with people that got PhDs in AI in the '80s and have been caring about this for decades. Right. I didn't care until I got over there. And then I was like, wow, this is super interesting. And reflecting on what has helped form my point of view a lot, that I really appreciate: at the time, I was the only UX person, for a short time. Other people showed up pretty soon after me, but I would just go to the research scientist meetings because there weren't other meetings to go to. Those were the team meetings, and we would just be reviewing AI papers.
Jess Holbrook [00:06:43]:
And so that really early on built into me that, oh, I should just be reading the AI research papers all the time. And I've been doing that for, you know, whatever, 11 or 12 years now. Good Lord. And so that really shaped me; I was always trying to stay very close to the technical understanding. You know, I'm not a computer scientist, I'll never be at that level of depth, but I try to keep close there. Yeah. And then from there we started to really approach the UX of AI from the ground up.
Jess Holbrook [00:07:15]:
So we were building products with the new models that had started working: the computer vision models, some of the earlier language models. This is way before the large language models. And from there we started to kind of induct some principles out. So at the time I was working with another person named Josh Lovejoy a lot, and he and I were working on these materials around the UX of AI. We called it human centered machine learning back then, because it wasn't even really talked about as AI a ton. And then we formed this group called PAIR, or People + AI Research, with Martin Wattenberg and Fernanda Viégas, who are two really exceptional research scientists who really straddle HCI and AI and data visualization and all these things. While still at Google, we had really framed our efforts as human centered AI. Then the momentum and movement around responsible AI came on strong and kind of absorbed the human centered AI efforts. And so everyone was referring to it as responsible AI.
Jess Holbrook [00:08:16]:
And so I led the responsible AI UX team there for a little bit, and then I moved over to Meta and did the same thing there, running the responsible AI user research team. And then I led the generative AI user research team there. I kind of chose to get back closer to product. And then I've been at Microsoft for about the last eight months, leading UX research for their AI division.
Erin May [00:08:38]:
Full circle. Back where it all began.
Jess Holbrook [00:08:41]:
Well, the joke, you know, with all the big tech companies at the time, I was like, wait, I kind of have to go somewhere else, right? Like, I have to collect all the Infinity Stones or the Pokémon, depending on which you prefer. Like, I can't go back somewhere.
Erin May [00:08:56]:
Yeah. What are you missing? Maybe Apple, Netflix. But yeah. So this is, like, what, 12, 13 years ago, with the first AI-centered job?
Jess Holbrook [00:09:02]:
Yeah, I think it was. I think it was 2014. Yeah. So 11 years.
Erin May [00:09:08]:
Yeah. Okay, 11, 11 or so years ago.
Jess Holbrook [00:09:10]:
Yeah. Yeah.
Erin May [00:09:11]:
And you mentioned that, you know, you were kind of in cloud at the time. That was the hot thing. Everyone wanted jobs in cloud. And you're like, I'll go zag while everyone's zigging. I'm just curious, is there a reason, or were you just up for an adventure? Was there something about AI? You said you kind of got into it once you were there.
Jess Holbrook [00:09:26]:
Yeah. I mean, a piece of advice for folks, and I recognize that there's some luck and some privilege in this, but I genuinely have always optimized for interestingness: am I engaged in the topic, and can I, like, not stop thinking about the thing I'm working on in a good way? Not in that bad way, in the good way.
Erin May [00:09:46]:
Right.
Jess Holbrook [00:09:47]:
And yeah, when I started to learn more, and I realized, with my really, really early understanding of what was happening with the deep learning revolution, the journey we were about to go on, I was like, well, this is by far the most interesting thing. And I've done other zags like that. People do a lot of resulting. It's like, oh, it worked out, or we got to do some cool stuff, so you saw that ahead of time. And it's like, no, all of it is a gamble. Success and failure almost always look exactly alike for, you know, 90% of the journey.
Erin May [00:10:22]:
Right.
Jess Holbrook [00:10:22]:
And then sometimes it works out. But, you know, also, I'm not posting about all the dumb mistakes I made and the stuff that didn't work out or anything like that. Right.
Ben Wiedmaier [00:10:32]:
Jess, we hear a lot about, and publish a lot about here at User Interviews, the way that AI is being used for research. We wanted to have you on for many reasons, but one of them is that you've been doing research on AI. You mentioned here that you've been inducting some of the principles that make for people centered, responsible, useful, effective AI. Could you linger a bit on what some of those principles are? Some of the ways that researchers who might not be directly working on AI products, but might be asked to in the future or are thinking about doing some of that work, what should they be thinking about?
Jess Holbrook [00:11:06]:
There's a lot of things that have come up over the years, and there's a few here. I mean, I think one of the harder parts with research, or just being in the AI space right now, is this tension you have about staying up to speed on things but not being distracted by everything. So I'll meet with people and they'll say, how do you stay up on what's going on in AI? And I kind of say, I don't. Nobody can. Nobody's up on everything, and if they say they are, they probably have something to gain from you believing that they are. And so there's this weird tension you have to hold of not getting distracted, because there's headlines every week. Every single week, there's some headline that's trying to grab you and say you need to shift everything you're doing and go in a different direction, or blah, blah, blah. But you have to stay very aware.
Jess Holbrook [00:11:55]:
And then over time, you have to kind of build up your intuition and your ability to say, okay, how much does this matter or not? When the "Language Models are Few-Shot Learners" paper comes out, that's massive, right? And it's like, we really need to pay attention to this, and not only for the output of the product, but also for things like the scaling laws that it revealed. These are things where it's like, oh, this is going to last; this is shifting the trajectory that we're all on. Versus, you know, there's thousands of papers published a month, at least, probably weekly. And so it's kind of parsing through what matters. I think also one of the things that we've tried to do is just double down on the really deeply human side of things. And I know that might be a common phrase, but we've done a lot of research that focuses not just on problems, but on aspirations. So we really focus on what we talk to people about.
Jess Holbrook [00:12:57]:
Because at times it can be very hard to talk to people about new AI capabilities. It's like, oh, AI can do this now, what do you think? That's a pretty bad piece of research. And so what we try to do is really anchor things not just in the problems people currently have, but in what kind of people they want to be. What do they want their life to be like? What are they trying to accomplish? And that feels like it really pulls things back from, hey, there's this cool new capability, to landing it again not just in, oh, you took away this little paper cut I have in my day. Which is great, I don't want more paper cuts. But also, are you actually helping me learn new topics that I want to deeply understand? Are you helping me with difficult interpersonal relationships that I really want to preserve? I'm in a financial rut and I don't know how to get out, how do I do that? One of the things I've done in our group is I've banned the plan-a-trip use case. I'd actually encourage this.
Jess Holbrook [00:14:01]:
I'm going to make a pitch to your audience. Unless you're, I don't know, unless you're Booking.com or Expedia or someone, ban the "we need to plan a trip" use case. I don't like it. It's a default that a lot of us go to, because most of us have disposable income to go on trips, and the challenge we have is, where should I go? We really try to avoid, I call them, use cases where your life is already great and you're trying to make it 1% greater.
Jess Holbrook [00:14:30]:
That's not what we're shooting at. So I think another part of the research fundamentals is, I sometimes call it, bring some drama into the use cases. Like, are you alleviating pain and anxiety and hardship for people, or are you making people whose lives are already great 1% greater? I think another thing that we're returning to is also finding real fans. So AI has a lot of people who use it who are real meh, you know, who are like, yeah, I use it, it's fine, or whatever. And so we're really trying to turn to these, you know, we used to call them power users, but even that's not exactly right, because in AI that can mean a lot of different things. But it's like, who are your fans, who are excited, and who would be heartbroken if your product went away?
Jess Holbrook [00:15:26]:
And trying to understand that, and then trying to expand a lot of that. And then I'll give one more that we think about a lot, which is, with AI, you kind of have the three possible A's of AI: augmentation, assistance, and automation. And these are different ways that you can approach what you're building. Are you augmenting human abilities, extending them and giving people, some people call it superpowers, I think of it as extensions, or compensating for our natural limitations? Are you providing assistance, where somebody, kind of like an agent, is going to do this for you, but you're not really expanding your own abilities? Or are we going to automate it away? And all three have their place. You know, a lot of people really like dishwashers. They automate.
Jess Holbrook [00:16:14]:
That's great. A lot of people like assistance with some stuff. They don't really want to do it themselves. But I try to remind people, I have this one talk where there's just a big slide that says, people don't hate work. And so there's a big part of, watch what you're automating. And especially, if you're automating something, do not automate away people's identity. Because there's a lot that people tie their identity, their dignity, their worth to in their work.
Jess Holbrook [00:16:43]:
And we're just seeing all these companies and all these things show up, and they're like, hey, don't worry, you don't have to write poems anymore or whatever. And it's like, okay, that's not a pain point I had, and that's a thing I like to do and I enjoy. And so there is really this connection to identity. I'll reinforce the term dignity. For a while, you know, we talked about human centered AI at Google a lot, and there was a time where I was like, I think I want to change it to dignity centered AI. I think that is actually the cornerstone of just about all the rest of it: preserving human and individual dignity and worth, and improving everything around that.
Ben Wiedmaier [00:17:28]:
Jess, you alluded to my next question here, which is some of the things that UX researchers should let go of or stop doing related to AI. And you talked about the wash of information and the ubiquity of, well, I don't know that, so I can't; I don't know what I don't know, and I have to find everything that I don't know. What are some other things that user researchers thinking about AI systems should, maybe not jettison completely, but maybe loosen their grip on?
Jess Holbrook [00:17:53]:
I mean, one is the forecasting. So, you know, on our team we do foresight and envisioning work, trying to look ahead. And years ago, you were talking about something 10 years out, and then it was five years out, and now we have these visions that are two or three years out. And like I just said, at any point in my career working on AI, if I had guessed what would be relevant two to three years out, I would have been wildly wrong. And I don't think that's slowing down; I think that's increasing. And so we're trying to find a new relationship between north stars and visions and what we're doing now. And I think it's even pulled into this almost year-out thing, where you almost just have more of a need for directionality versus concrete vision. Meaning, if you're like, we are going to really focus on augmentation, and augmentation means we are going to help people accomplish tasks, but they will always be able to do it by themselves and they will learn when they use our tool.
Jess Holbrook [00:18:54]:
That's a direction that you can kind of head toward without the need for these, like, waypoints. You generally need some concrete steps, but I think that the reduction in the ability to forecast is pretty big. I think another thing that we're figuring out is, because this space is evolving so rapidly, you need the matching of high agility and enduring insights. And what I mean by that is, there's so much going on, there's so much movement, you have to be quite agile. You have to be able to do research quickly. You have to be able to pivot. I'm seeing research teams become more malleable again, where we went through a phase where you kind of have this deep expertise in something, and now people are just moving more often.
Jess Holbrook [00:19:41]:
And so I think the more you lean into that, the better, gaining this more generalist ability. We're seeing more and more of our team there, and we're finding the right balance between the generalists and the specialists. But in this space, action is information. And creating a prototype, which of course is almost trivial at this point, and learning about a new experience, is sometimes the most strategic thing you can do, because the space is evolving so quickly that there are no patterns to match that you can go to. That actually leads me to another one: just about any researcher should be able to prototype at this point with the tools that are out there. There's lots of good articles, there's lots of good tools.
Jess Holbrook [00:20:22]:
Spend an afternoon trying to figure it out. You can probably prototype quite a bit more than you think you can. But then with all that agility and with that ability to prototype, we can't drift, because then we'll drift off into this space of novelty and "this is really cool." And there's a place for that, for sure. I'm not saying don't invent. Invention is very fun. I think more researchers should think of themselves as inventors than do. I really believe in this duality of discovery and invention.
Jess Holbrook [00:20:53]:
And we're kind of always going back and forth between discovery and invention, but we also need to really stay anchored in those enduring human needs and not drift from them into the space again of, like I mentioned earlier, "my life is awesome and you made it 1% more awesome." And so we're doing a little bit more of these core research assets that are like: these are the enduring needs, these are the hard problems. Sometimes I'll talk about them as, these existed well before all of our technology, and these are the really hard ones to solve. And so can we stay anchored in those? These are things like human accomplishment and human connection and feeling self-worth.
Erin May [00:21:36]:
Yeah, the need for dignity, like you mentioned.
Jess Holbrook [00:21:38]:
The need for dignity. And then, not to be all super serious about it, it's like, we all want to be entertained too. But generally we all know when we're looking at, as my girls call it, brain rot, versus something where we're like, oh, I'm becoming a better version of myself by watching this.
Erin May [00:21:56]:
I bet that psychology background is coming into use with that quite a bit.
Jess Holbrook [00:22:00]:
Yeah, it is interesting how much it becomes more and less. It comes in these waves of prominence in the work, I feel like.
Erin May [00:22:08]:
Yeah, you mentioned that prototyping is so accessible now that researchers can and should be doing it as part of their work. Are you using Figma, or using other tools? Are there next-gen kind of AI tools that you're using there?
Jess Holbrook [00:22:20]:
Yeah. I mean, this is another part of, if you're in AI, I try to push myself, and I'm trying to push deeper and deeper into all the tools, using a lot of different tools on a daily basis. Not to pitch my current product, but I do use Copilot for this. I'm experimenting with different tools like Cursor or Warp that actually allow pretty rapid code development. And then, yeah, Figma for designing. A really cool one that I've enjoyed a lot lately is Recraft AI. That's image generation for professional designers or graphic designers.
Jess Holbrook [00:23:00]:
And I love it. It's this great marriage of generation and control. So I was able to create, you know, a set. We had, like, a team onsite, and I was able to create this great set of assets that were very unique to the mood I was trying to evoke with the team. And it didn't take me long at all to do it, but I had the control I needed to get it exactly the way I wanted it.
Ben Wiedmaier [00:23:23]:
Yeah. A lot of the articles for user researchers are like, frame your prompt this way, and a lot of it is around the "I'm not sure what I'm going to get," you know, the iterative nature of that. I want to linger for a second on your team, because you said at the start of your career it was you and 10 engineers, and then you were talking about specialists and generalists. What is the ideal team structure for you? Do you have folks embedded in tech stacks? Granted, I know you're speaking about your team, but you've built teams at several different companies, I would argue, successfully. Have you found an approach that you like? It sounds like you need to be close to the technical experts.
Ben Wiedmaier [00:24:00]:
How are you organizing user research with your stakeholders?
Jess Holbrook [00:24:04]:
Yeah, I've put together some teams I'm really, really proud of, made of some pretty exceptional people, at a couple of companies. I don't think I screwed them up too bad. I think there have been some really great crews in there. I'll actually go just one step before your question, which is, who do you bring onto those teams? One is, I don't require that people have had AI experience. For every team, you want a mix of people that have had really deep AI experience and people that haven't, to have that mix of views, for sure. It's also kind of interesting as now everybody has AI experience, so we've had to treat it like, have you worked on it before the ChatGPT moment or not? You know, when I was at Meta and I was leading the generative AI team, I'd say, why do you want to work on our team? And they're like, oh, I'm just really excited.
Jess Holbrook [00:24:54]:
I want to work on generative AI. And my kind of joke was, oh, don't worry, you will. It doesn't matter where you are in the company, you will. It's just, do you want to do the particular thing we're doing over here? And so there's that. And then the second part is, I really try to build teams as much as possible with people with a point of view. And by that I mean, do they have a strong point of view on something? I don't have to agree with it at all.
Jess Holbrook [00:25:18]:
But do they come to a point of view, and will they advocate for a point of view? Because I think that's something that's very, very important with researchers. And I'm not looking for people that are generally competent but don't bring a point of view to the situation. So, bringing people together, if we have a group of people like that, we always try to embed as many people as possible. A lot of times we've reported in functionally, but then people are embedded into the products. I tend to even do that earlier than other people do. So some people try to do a service model where you're kind of moving people around later. I try to embed earlier and create priority, and create inconvenience if there needs to be.
Jess Holbrook [00:26:05]:
So we have to have priorities. I really try to avoid peanut-buttering people around. Nobody's happy when you do that. And it's kind of that thing of, you're avoiding a few tough conversations and you're signing yourself up for, like, a year or two of pain. And so we did that on the current team. When I joined, we had a great crew, but we didn't have the number of people that we needed.
Jess Holbrook [00:26:32]:
And so I said, hey, I'm going to move these people over here, and these things are going to be unfunded. Do we all agree on the priority? And there were some tough conversations, but when you frame it in terms of, we all agree that these are the most important things, right? And everyone goes, yeah. And you go, okay, well, that's where I'm going to put my people. And that's no different than if I was a PM or an eng lead. I tried to reverse it: I was like, let's imagine we only have a few PMs for the whole org. Would you try to have them have total span of control of everything, or would you try to focus them on the most important things? And obviously you'd do the latter. And then within each team, people will be embedded.
Jess Holbrook [00:27:13]:
It depends on the level. We're trying to find the right level, in the way that we're organized, that allows people flexibility on what they work on, but also to build domain expertise. So it's kind of the thing I mentioned earlier about the generalists moving around: you're more moving around within a domain. We might have a group that's really focused on, say, growth and monetization or something like that. And it might be a small team, but we would then have a regular check-in and say, well, nobody's going to do anything outside of this growth and monetization space, but what you're doing within there might change pretty rapidly. And then we also have a few centralized specialty roles, like more quantitative-focused UXRs.
Jess Holbrook [00:27:52]:
But we also have people that have super strong quant skills embedded as well.
Erin May [00:27:57]:
Awkward interruption. This episode of Awkward Silences, like every episode of Awkward Silences, is brought to you by User Interviews.
Carol Guest [00:28:05]:
We know that finding participants for research is hard. User interviews is the fastest way to recruit targeted, high quality participants for any kind of research. We're not a testing platform. Instead we're fully focused on making sure you can get just in time insights for your product development, business strategy, marketing and more.
Erin May [00:28:22]:
Go to userinterviews.com/awkward to get your first three participants free. Are the specialties you need in researching for AI different than what you might find in a different context? Is it really that interest in and knowledge of AI that is the important thing, versus the methods and the research specialties?
Jess Holbrook [00:28:46]:
Yeah, it's been one of the more fun things: you run into not just research specialties for each person. We're weighing, do they have a specialty in AI, do they have a specialty at the intersection of AI and something else, or do they have a specialty outside of AI that we really need in AI? And so what I'll say is, there are some people on our team that have been working on AI for years and they have a deep expertise, and especially at Microsoft, they just know how it's shown up in our products. They have the history; they're really, really knowledgeable in that way. Then we brought on people who, say, have had a lot of experience with especially generative AI in Gen Z use cases. And so we'll say, oh, okay, we really want to build that up. Or maybe they have experience really focusing on co-creation methods, and so we go, okay, we want to bring that into our team.
Jess Holbrook [00:29:39]:
And then, mentioning the growth part before too, we might say, oh, this person has done really amazing work with startups, helping them grow from their first few customers to their first wave of people. And so we go, they don't know a ton about AI, but we're going to level them up and teach them. And then even right now, I'm actually really lucky: we have design and video producers on our team who are working super hard to create these resources so the whole team can be closer to users and customers all the time. So really rethinking how we use artifacts like user videos, and how we make data way more accessible to the team, and how everyone on the team, regardless of role, can feel like at any moment they could get customer contact or watch a customer video or connect. And so we're invested there. And then in previous projects I've worked on, we brought on specialists to help us really understand, I guess, the use cases and the artistry of things. So when I was at Google, we worked on a project called Google Clips, and it was an AI-powered camera. And we had three different photographers and filmmakers on our team.
Jess Holbrook [00:30:56]:
And so they would work with us, the UX team and the research scientists, to explain what makes good photographs, and what makes a good collection of photographs, and how you tell a narrative, and how you don't change perspective too dramatically between images unless that's your goal. And so there's a lot of learning from communities like that that we're trying to do.
Erin May [00:31:19]:
Yeah, it's not a paradox, but it is interesting: AI feels so, I don't know, modern and futuristic, and yet it's really bringing back those subject matter experts. Right. Whether it be photography or the research methods. Still pretty applicable there.
Jess Holbrook [00:31:35]:
Yeah. And I think one of the most important parts is that we do this with respect for those people and their knowledge. And I do think that there are right and wrong ways to do this. And, you know, maybe to shout out another group, Runway does this really well. To call them a video generation company would be disrespectful; I think they would probably describe themselves as creating a new camera. But they do this really well. They work with the filmmaking community, they hold film festivals, and they are deeply engaged and respectful, and they love the craft of the thing, and they're trying to build better tools for those people. And that's a lovely way to be.
Ben Wiedmaier [00:32:19]:
Yeah. Jess, we've talked a bit about time here. Even before we started recording, we were joking about time. What has changed the most, even in your decade-plus? As you've alluded to, a lot has changed. What has changed the most in researching AI?
Jess Holbrook [00:32:36]:
Man, there's a few things. I mean, in the 2014 era, it was the, oh my God, it works. Wait, this is actually working. I mean, not to rehash all the details, but when you start to do the supervised learning and learn entities from unstructured data, and you have these great presentations like "The Unreasonable Effectiveness of Data," there's this moment of, oh my God, it's gonna work. I can't believe this is gonna work. And so we got so excited and we were doing all these things, and then you learn the reality of it, right? The precision and recall trade-offs, accuracy challenges, all this. And then it goes into this crazy hype cycle that we keep going in. We kind of hyped up, and I thought we were kind of plateauing, and then ChatGPT comes out and it's like, oh, here we go.
Jess Holbrook [00:33:27]:
And, you know, there's all these hype cycles. And so that distorts a lot of people's grounding in what the reality of the experience is, what it's actually capable of, both for end users and for people trying to deploy the apps or the solutions. You know, then we had the rise of responsible AI and people sounding the alarm bells and saying, hey, hey, hey. I remember I was at the first workshop that I'm aware of on machine learning fairness, while we were at Google, and we kind of got together for a day. And I think I was pretty naive; some of us were like, okay, cool, we'll solve the fairness issues next quarter and then we'll be done and we'll move on. And it's obviously way more deeply challenging than that. And then I think from there, though, everyone's evolved from "we want to try to predict every possible potential risk we can before launching" to "there will be trade-offs; how do we understand and mitigate as many potential risks as we can beforehand, but then weigh those against the potential rewards?" And one of the challenges with AI is, a lot of times the potential risks are just bigger than the potential rewards, and that's a bad trade.
Jess Holbrook [00:34:50]:
You know, like, why would you do that? But I've seen a lot of the industry really move to these trade-off mentalities. And then there's ChatGPT. While things were moving fast and a lot was happening, we had amazing breakthroughs like AlphaGo, and you had products using AI that were wildly successful, like Meta's recommendation systems or Google Photos or whatever it might be. But ChatGPT just broke us out of stasis, and everyone has been in a sprint to see who can get to the platform shift first ever since. A second thing, as I kind of mentioned earlier, is everyone's an AI researcher now. And I always say it's kind of funny, it's like having a favorite band and then it gets really popular. Because there's part of you that's like, yeah, that's awesome, I love it. And there's part of you that's kind of like, oh man, it's over.
Erin May [00:35:44]:
Yeah.
Jess Holbrook [00:35:45]:
I mean, there's part of you that's like, oh, it was so fun when it was kind of smaller, you know. And, whatever, I'm a latecomer for a lot of people, so a lot of people might say, oh, it was a lot more fun before you even got here, Jess. But yeah, the other point I made is, at any point, predicting two years out, I would have been wrong, dead wrong. And then I think the last point I'll make is, I'm still surprised. I don't think user researchers, and UX more broadly, have embraced the tools as much as I thought we would have by now. And some of it is because they're not delivering value, but some of it, I don't know. Are we too hesitant? Should we actually be incorporating AI more into our work than we are? I mean, this seems to be a path for just about every profession. And so I think much of our work will be AI-augmented.
Jess Holbrook [00:36:37]:
And then I think that there will be an even higher premium on the un-AI-augmented work: the people willing to go and do the face-to-face conversations, one on one with people, to get insights that quite literally can't be found any other way.
Erin May [00:36:53]:
Awesome. Yeah. Are there particular AI tools or use cases that you would like to be using more or would like to see researchers use more?
Jess Holbrook [00:37:01]:
Yeah. I think there's an augmentation set that I like a lot, and there's products out there, like, I don't know, Outset AI or something like that, that are really helping you expand and use surveys in really interesting ways. You know, we've always had big data and thick data, and I think they're toying with a third thing there that's kind of qualitative at scale. I don't think we have a good name for that, or have fully realized it, yet. I love all the efficiency gains that I get from these tools. Although I've had this thought, and I've shared it with other researchers and I get a lot of head nods: I don't want an AI to go through my notes and do a thematic analysis, because that's when I figure out what I think.
Ben Wiedmaier [00:37:50]:
Right. Yeah.
Jess Holbrook [00:37:51]:
And so, you know, this gets back to the identity and dignity part.
Erin May [00:37:54]:
I'm like, right, people like their jobs, they like working. That's what you like.
Jess Holbrook [00:37:58]:
Yeah. It's funny, you know, as my career has gone on and my title has changed, people will be like, oh, we can't ask Jess to code responses. I'm like, I love coding responses. I love coding responses. I love going through verbatims and trying to figure out what's happening there.
Jess Holbrook [00:38:16]:
That's why I do this. I'd go do something else if I didn't want to do that. And so I love that. But when I reflect on what is the biggest gain an AI could give me, it's not the efficiency thing. It's not helping me do what I already do faster. I was thinking about what is the biggest value that a more senior researcher gives a more junior researcher, and it's that they help them ask better questions. And that's what I want AI to do.
Jess Holbrook [00:38:45]:
That's how it could really shift my value and what I'm capable of. Because when you're a junior researcher and you go to a more senior researcher, you're like, hey, I'm doing this study, I'd love some feedback, I'm going to kind of ask this, and whatever. Sometimes they'll give you feedback on method or something like that.
Jess Holbrook [00:39:05]:
But the really senior folks go, I don't think you want to ask that. I think you want to ask this other thing. And then you have that moment where you're like, you're right, that is actually what I want to ask. And that's what I want from AI. I want it to be the more senior researcher to me, the one that helps me.
Jess Holbrook [00:39:24]:
Break me out of my frame, or break me out of, you know, the box that I can't see, in the question I'm asking.
Erin May [00:39:29]:
More of a mentor than an intern, which is what you always hear, right? Like, AI is like a bad intern. But you're saying maybe we aren't asking enough out of our AI. Right, Jess?
Ben Wiedmaier [00:39:39]:
It reminds me of, like, a dissertation or thesis defense, where so much of your prospectus is, I mean, you might defend your method, but you're defending your questions. Defending your question set, or maybe your framework, your rationale for pursuing the thing you find interesting or worth investigating.
Jess Holbrook [00:39:55]:
Yeah. Of all the things you could have asked, why these things? Why these questions?
Erin May [00:40:01]:
Yeah. I guess, future looking, you know, you talked about trade-offs and weighing upsides and downsides. What are you most nervous or excited about when you think about how this is all going to play out, and research's role in it?
Jess Holbrook [00:40:15]:
Yeah, I mean, I'm, you know, terminally optimistic or whatever you want to call it. So I generally think, and I know that there's a lot going on, but if you look at the data, most things have gotten better. And I do believe that things will be on a long, I'm not saying it's going to be a straight line, but I do think things arc toward a better thing, or a better way. I'm very optimistic for AI to really help with human augmentation, to give us tools for thought, and for us to really extend our minds into new places and be deeper thinkers, better thinkers, get more done, ask better questions, and we'll all get a bunch of efficiency gains along the way. I think that's really good. The things I worry about: I think there's a lot of risks, all the usual AI risks. I worry about consolidation. I worry about these abilities not being spread around and everyone having them.
Jess Holbrook [00:41:18]:
You know, I want there to be lots of startups. When I was at Meta, one of the things I really loved was the open source approach. I think all the companies will have something along those lines at some point. I think that's pretty key. But like any technology, my biggest worry is who's going to control it and what are their motivations. And the best way to build infrastructure against that is through things like open source and accessibility, and ways to decentralize that control at all times.
Erin May [00:41:52]:
Yeah, and I thought originally we were maybe talking about the user and employee side too, of spreading the wealth in terms of who gets to control the future, of being the users of agents versus displaced by agents and AI. Right. Hopefully that is spread around as well.
Jess Holbrook [00:42:09]:
Yeah, I am a believer in that. Well, this goes back to old cybernetics things, where we make work robotic and then are shocked when robots take it over. And so this is where things that are over-processed, or broken down into tiny units, that are repeatable and don't use imagination and don't use intuition and don't use discernment and decision-making, are easily automatable, because we made them that way. And so anything we do that to, we should expect to be automated at some point. I do believe our work will shift to something else, and generally there's always been something better. But I do think that there will be a shift as some parts of the job are automated, and I hope more of them are augmented in the ways that we've talked about.
Erin May [00:42:57]:
All right, closing down with our rapid fire section, what are a couple of your favorite interview questions?
Jess Holbrook [00:43:04]:
I wasn't sure actually with this one, so I want to give you three, depending on who I'm asking.
Erin May [00:43:10]:
Great.
Jess Holbrook [00:43:10]:
Okay. So if I'm asking a participant a question, by far the favorite question is: what is your favorite blank that you've ever made, taken, created, et cetera? I've been on projects where it could be, like, an app somebody made. It could be a photograph somebody took. Ask them about something they're proud of, and they light up. They can't wait to show you. And you learn so much about, again, identity.
Jess Holbrook [00:43:34]:
What are they proud about? Why are they proud of this thing that they made? It's going to give you, like, gold on their identity and their motivations. If I'm talking to interview candidates, so somebody who wants to join our team, one of my favorites is to ask them: what are you talking to your mentors about right now? And I love the question because it's loaded, because I'm assuming that they are talking. But it's me trying to get at, are you really comfortable? Are you self-aware, knowing where you need to grow? Have you connected with other people? So do you have the ability to not just be self-aware, but to go to another person, to come to somebody with humility? And are you actively doing it? And so the best responses to that are always, yep, here's this thing I've been working on for a while with these people. But a lot of people don't have anything. And then if I'm talking to, like, a direct report, somebody who reports to me, it's a simple one: I ask, what can I do better? And I do not ask, is there anything I can do better, or is there anything that could be better, or is there anything we could do better? Because I usually speak in "we" about the team, regardless. But I just say, what can I do better? There's something. I got a list, I'm working on my list, but I got a list. So what can I do better?
Ben Wiedmaier [00:44:55]:
How about resources that you would recommend? You mentioned a few of them. Are there others that stick out either about AI or human centered thinking or research generally?
Jess Holbrook [00:45:04]:
Yeah, I'll give two that are in the main discourse, and then I'm going to try to push people to a couple of others. So, in the main discourse, I still believe in the People + AI Guidebook. It originally came out in 2019 and was pretty heavily updated in 2021. I revisit it often, and I think that we got a lot of useful things right that hold up. I've also really enjoyed, since being at Microsoft, the HAX Toolkit, which is a similar look at human centered AI and approach. And it's very actionable; there's lots of toolkit-y things in there. So those are my two main, more mainstream ones.
Jess Holbrook [00:45:40]:
Definitely go check those out. The other ones I want to suggest are going to be less about research and more about invention and thinking about what we're doing here. And so I want to push people a little bit. One of those is Matt Webb's blog, Interconnected. Matt is somebody I learn from every time I get the chance to talk to him. He has a great new company called Acts Not Facts, and he really likes to build to think. And I think, as researchers, we default to "I need to read and absorb everything to think and figure things out." I think we could probably do a bit more building to think. And so that's one of them.
Jess Holbrook [00:46:20]:
Another one is the work and the blog of Maggie Appleton. Maggie, again, is a really interesting thinker in the space of design and AI and anthropology and programming, all these things. And the reason I recommend her is, I think her ideas are very thoughtful and original, but also I love how she blends these fields and isn't afraid to. And I think research could do with a little bit more blending, a little bit more embracing of the adjacency, and getting our thoughts out in little multimedia ways. And then the last one I'll recommend is a newsletter called the Pessimists Archive. It's a great little Substack, and it basically just shows newspaper articles or announcements about almost every technology we take for granted today or use happily, and all the pessimistic reactions to it when it first came out, and how many of them seem kind of absurd. And that's a little bit of a reminder that there will always be pessimists.
Jess Holbrook [00:47:22]:
I believe you should listen to them. I think you should listen to them and understand. But know that they're not always going to be right, and know that pessimism will always sound smarter than optimism. But that doesn't mean it's right.
Erin May [00:47:35]:
Love that. I think those are all new ones, so that's great. And we will link all of this in the show notes. Last one. Where can folks find you? Are you on LinkedIn or any platforms?
Jess Holbrook [00:47:47]:
Yeah, I need to be more on. I don't know if I need to be more online or not.
Erin May [00:47:51]:
You don't need to do anything, Jess. You do you.
Jess Holbrook [00:47:54]:
Yeah, yeah, yeah. I think LinkedIn is the big one. I don't go to X anymore, but I really haven't found another place. I don't really post on Threads or anything else. So yeah, LinkedIn is probably the good one.
Erin May [00:48:06]:
We'll link that too. Well, Jess, thank you so much for being with us. I learned a lot. This was a lot of fun and have a great weekend. It's Friday.
Ben Wiedmaier [00:48:14]:
Thanks Jess.
Jess Holbrook [00:48:15]:
Awesome. Thank you both. I really appreciate it. It was fun. Have a good weekend.
Erin May [00:48:25]:
Thanks for listening to Awkward Silences, brought to you by User Interviews. Theme music by Fragile Gang. Hi there, Awkward Silences listener. Thanks for listening. If you like what you heard, we always appreciate a rating or review on your podcast app of choice.
Carol Guest [00:48:50]:
We'd also love to hear from you with feedback, guest topics or ideas so that we can improve your podcast listening experience. We're running a quick survey so you can share your thoughts on what you like about the show, which episodes you like best, which subjects you'd like to hear more about, which stuff you're sick of, and more just about you, the fans that have kept us on the air for the past five years.
Erin May [00:49:08]:
We know surveys usually suck. See episode 21 with Erika Hall for more on that. But this one's quick and useful, we promise. Thanks for helping us make this the best podcast it can be. You can find the survey link in the episode description of any episode, or head on over to userinterviews.com/awkwardsurvey.