#59 - Ethical Hacking, Information Security, and UX with Ted Harrington of ISE
E59


Erin: [00:00:31] Hello, everybody, and welcome back to Awkward Silences. Today we're here with Ted Harrington, the Executive Partner at ISE, that is, Independent Security Evaluators, and he's an InfoSec expert. He's the author of Hackable, an Amazon number one bestseller about why software gets hacked and what you can do about it.
Now, why are we talking about this on Awkward Silences? We're going to talk about hacking, and preventing hacking, in the context of UX, and about broadening your understanding of what might lead to security breaches. So thank you so much, Ted. We've got JH here too.
JH: [00:01:17] Yeah. Starting with the important stuff: I noticed your LinkedIn URL says Security Ted, and I just need to know, does anyone call you Security Ted? Can I call you Security Ted? Or is that just a URL trick?
Ted: [00:01:27] You can call me Security Ted if you want, go for it. Yeah. I mean, let's do it.
JH: [00:01:32] Cool.
Erin: [00:01:33] SecTed for short. Amazing. So let's start from the top. One of the things that UX researchers try to uncover in their work is unsaid or unacknowledged assumptions. And I know that's really important to the work that you do when you think about where security vulnerabilities might exist in what we're building.
So talk to us a little bit about what that process looks like and the work that you do.
Ted: [00:02:04] Yeah, absolutely. This is an important topic that really isn't discussed enough in the security community, in my estimation.
The places where, and the reasons why, systems ultimately get broken come down to bad assumptions. When anyone's building anything, which is essentially the core of UX, right, you're trying to build something and understand how someone is going to interact with it in a certain way.
And you want, of course, to make that as easy and pleasant and effective a process as possible. So you're starting to make assumptions about what the user will do or won't do, or will want to do or won't want to do. And I've seen this over the years of leading ethical hackers, which is what I do.
I lead ethical hackers. We work with companies who are building things, helping them understand the flaws they have in their systems. And we're always trying to understand: what do they think the user will do? And I can tell you a phrase, it's going to sound like I'm making it up, but I'm not making this up.
I hear this all the time. We'll be meeting with a company, whether it's one of our existing customers or a prospective customer, and we'll be thinking through different attack scenarios. We'll say some version of, well, what if an attacker did X? And the response will be some version of, oh, no one would think to do X. And we're sitting there like, I literally just did, and asked you about it. Yeah, exactly.
And when people say that, that perspective of, oh, no one would think to do that, they're not being cheeky or coy or trying to be funny. They genuinely think, no, an attacker, I mean, a user, wouldn't do that. A common example: let's say a certain input field requires 20 alphanumeric characters. Well, what if I put in a command instead? If you put in that command, does it actually get processed by the system, which then executes the command?
In a lot of cases, the answer is yes. And those are the kinds of things where people say, we're building this for real estate professionals who want to make listing for-sale properties easier. Why would they ever do that?
It's like, well, what if they did? And that's what people really need to be thinking about: that malicious understanding of the assumptions that are made about how a system is intended to be operated. Because what attackers do is identify those assumptions and then poke at them: well, what if something different happened?
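To make that input-field example concrete, here's a minimal sketch in Python of the kind of flaw Ted is describing. The `listing-tool` CLI and the 20-character rule are hypothetical, assumed purely for illustration: the unsafe version builds a shell command from raw input, so an attacker's shell metacharacters ride along, while the safer version enforces the assumption and avoids the shell entirely.

```python
import subprocess

def lookup_listing_unsafe(listing_id: str) -> str:
    # The builder assumes listing_id is 20 alphanumeric characters,
    # but nothing enforces that. With shell=True, the string is handed
    # to a shell, which interprets metacharacters like ";" and "|".
    result = subprocess.run(
        f"listing-tool fetch {listing_id}",  # hypothetical CLI, for illustration
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

def lookup_listing_safer(listing_id: str) -> str:
    # Enforce the assumption instead of trusting it.
    if not (len(listing_id) == 20 and listing_id.isalnum()):
        raise ValueError("listing_id must be exactly 20 alphanumeric characters")
    # Argument list, no shell: metacharacters are never interpreted.
    result = subprocess.run(
        ["listing-tool", "fetch", listing_id],
        capture_output=True, text=True,
    )
    return result.stdout
```

With the unsafe version, input like `abc123; cat /etc/passwd` stops being a listing ID and becomes a second command; the safer version rejects it outright.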
JH: [00:04:44] Just to back up a little: when you talk about security, the first thing that comes to mind for me is somebody in a terminal with the command line, hacking in and stealing data or something like that. Right? That feels very obviously security-related, and preventing that.
But what else does it include? Like people challenging assumptions to do things that are more abusive with a piece of software or a platform, like harassment of other users, stuff like that. Or is that outside of what you would consider security in your focus area?
Ted: [00:05:13] I love it. It's such a good question, and I love the way you're asking it, because that is itself one of the common misconceptions about security. People think, oh, security is just about not getting data stolen. And yeah, that's an important part, no doubt. But security is so much more than that, including all of the scenarios you just described.
What we need to think about is motivation. Different attackers are motivated to achieve different results. To me that feels really straightforward, but you'd be surprised, actually, how many organizations think of attackers, or "hackers," putting that in air quotes (we can talk about what hackers really are in a minute), as if they're one idea, right?
They attack systems in order to make money. And that is a motivation for some attacker types, but there are plenty of motivations. So you talked about the idea of using a platform to harass somebody. Sure: if someone has some sort of cause that they want to advocate for, there's a category of attackers called hacktivists.
These are attackers who attack in order to advocate for some sort of cause. It's exactly what you said. Yeah.
Another motivation is the explorer mindset: someone who is curious and wants to prove they can do it. There's an attacker group known as the casual hacker, or the small-group hacker, and basically they want to prove they can do it.
It's an exploration, it's a challenge. Sometimes it's even just a prank. There's an example that happened in San Francisco a few years ago with those digital construction signs, the orange ones along the side of the freeway when there's road work being done. An unidentified group or individual attacked those.
They changed the message. Instead of saying something like "this exit is closed," it said, "Godzilla attack, turn back." And Godzilla certainly wasn't attacking; that was a different kind of attack. That was someone who just wanted to do it.
They wanted to see if they could do it, and they proved that they could.
Erin: [00:07:31] Yeah, that feels like the hacker's hacker, right? For the love of the hack. Just, I did it because I could.
Ted: [00:07:40] For the love of the hack.
Erin: [00:07:41] For the love of...
Ted: [00:07:42] Go make a t-shirt that says that.
JH: [00:07:46] You mentioned starting with the assumptions, like, users wouldn't do this, or however you phrased it. When you are hired by a company to come in and help them identify, you know, gaps in their security, what does that look like? Do you come in and look at it with fresh eyes and play with the software yourself to try to understand what assumptions they might be making?
Do you interview the people who made the software to see what assumptions they made, talk to users? How do you actually go about collecting that initial information to start getting the wheels turning on where there might be weaknesses or opportunities to get in?
Ted: [00:08:18] Yeah. There are a few ways that organizations like ours work with companies. There's a right way and there's a wrong way. The wrong way is what's called black box testing. Black box testing is basically where a company says, all right, I'm going to hire this outside organization to help me find how I might get hacked. And, well, my attackers don't have any information, so I'm not going to give this company I just hired any information, because I want them to emulate real-world attack conditions. That's the thinking behind black box testing. Unfortunately, it doesn't actually deliver real-world conditions.
All it does is hamstring the company they've now hired. It's like someone hiring me and saying, hey, can you break into this thing? And then telling me nothing about it. The metaphor would be: you go to your doctor and you say, hey, I don't feel that good. And the doctor says, okay, tell me your symptoms.
And you're like, no, you're the expert. You figure it out. That's literally what's happening when you withhold information from the expert you're hiring to help you. So that's the wrong way. Now, the right way is what's called white box testing, so named, obviously, in sharp contrast to black box testing.
A white box methodology involves all the things that were woven into your question. At the beginning of a project with a company, we actually sit down with them and ask questions: who is this for? What business problem are you solving by creating this system? Why does it exist?
By understanding those things, we're able to understand what things of value the system has to protect. And once we understand the things of value the system has to protect, that helps us think about what types of attackers would want to attack this type of system, based on their respective motivations. We talked about some of the motivations already; we certainly didn't talk about all of them. It's a lengthy list.
Once we've gone through that process of thinking about what you need to protect and who you're worried about being attacked by, then we start thinking about where these attacks will be launched. That's where we work with these companies to really understand the architecture. What are the different components? Where are the input fields? How do users interact with the system? What's the intended use of the system?
Once we have all that information, that's when we start applying that more malicious viewpoint. We'll look at the system and say, okay, this feature is supposed to do X, but if we do this certain series of things, maybe we can make it do Y. And then we'll explore whether or not we can make Y happen.
And if we can, then we determine how severe that is. How significant an impact is that against the things that are important and matter to this company, the things they're trying to protect? Then we come back to the customer and say, okay, here are all the issues we found, and here's an example of how an attacker might actually execute this exploit.
And here's how severe it is, so that you can understand how to prioritize fixing the issues, because you're going to have a bunch, and you need to know where to start, what's first. And then we'll tell them how to fix it. We make the recommendations; they wind up actually making the fixes.
We'll say, you know, this issue can be solved either in this way, by making this type of change, or in that way, by making that type of change. Then they make the change, and we come back and say either: good, you fixed it, the problem is solved; or: you didn't quite fix it, let's make this subsequent change in order to fully solve it.
The benefit, of course, the outcome of all that, is that the company now knows exactly what to do. They know exactly how to do it. They have confidence they're doing it right. They're able to say: here's exactly what we did, here's what the problems were, here's how we fixed them. And now you can trust working with us.
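As a concrete illustration of that "supposed to do X, can we make it do Y" step, here is a minimal sketch of one such probe, a broken-access-control check of the kind an assessment like this might run. The endpoint, the document IDs, and the authenticated session are all hypothetical, assumed for illustration; a real engagement tests many assumptions like this one.

```python
import requests

BASE_URL = "https://app.example.com/api"  # hypothetical system under test

def probe_document_access(session_a: requests.Session, doc_id_of_user_b: str) -> dict:
    """Probe one assumption: 'users can only fetch their own documents.'

    The feature is supposed to do X (serve user A their own documents);
    we test whether it can be made to do Y (serve user B's document to A).
    """
    resp = session_a.get(f"{BASE_URL}/documents/{doc_id_of_user_b}")
    return {
        "assumption": "users can only access their own documents",
        "violated": resp.status_code == 200,  # user A received user B's document
        "status_code": resp.status_code,
    }
```

If the probe comes back violated, severity is then judged against what that data is worth to the business, which is why the "what are you protecting?" questions come first.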
Erin: [00:13:08] You talked a little bit about going in and talking with a client about these things of value that the system needs to protect, and how users interact with the product. I'm curious, on the UX research side: how do you start to pull back the layers on how users in reality interact with the product today, and then get into that hypothetical mindset of how they might choose to maliciously interact with it?
How do you start to get that information in terms of users and how they're interacting with the product?
Ted: [00:13:49] Yeah, that's a great question. There are a number of what are called secure design principles, or principles of secure design. These are essentially universal truths about how you build a system that's resilient against attack. And one of those principles is called psychological acceptability.
I wouldn't actually be surprised if that principle also goes by the same name, or maybe a different name, in the UX field. But essentially, psychological acceptability means that the security functionality must not be so cumbersome to the user that they'll circumvent it. The classic enterprise example was years ago, when systems first became available to help you securely transfer files.
Person A has something, maybe financial data or whatever, and they need to send it to person B, and someone said, well, let's come up with a secure way to do that. So they built these systems, and the systems wound up being kind of hard to use. It was a departure from the way people were already doing their work.
And what was happening was, despite the heavy investment in these systems, the users were just emailing these sensitive files to each other. That was the entire problem these systems were built to avoid, because sending an email means, you know, people can forward it. Email certainly has some deficiencies from a security standpoint.
It's not totally rubbish, but there are issues to consider. These file transfer systems, by contrast, were built specifically to enable the secure transfer of files from person to person. And psychological acceptability was on full display there, because you saw users saying, well, that is too difficult for me.
Erin: [00:15:48] Got it. Okay. Interesting.
And then another thing you've been talking about a little that I'm curious about: there's malicious intent, right? So we need to have creative imagination about what bad actors might cook up in their spare time to do us harm. But then there are these accidental breaches. What trouble might users get into who are not necessarily trying to abuse our system, but are just users of an imperfect system?
Is that something that you're screening for as well? Sort of accidental hacking, or not hacking, right, but accidental security issues?
Ted: [00:16:26] Yeah, a hundred percent. So when you think about attackers, imagine it almost like a tree. The first fork in the tree is: there are external attackers and there are internal attackers. The difference between the two comes down to two conditions, elevated trust and elevated access. Someone who has either or both of those conditions is an insider.
Now, most people think of an insider as an employee or a user, and in pretty much all cases that's true, but it's not to say that all insiders must be employees. An insider could be anybody who has that sort of extra access. Certainly employees, but it could be consultants, or third parties you trust, or anyone with access through third-party stuff you integrate with. It could even be members of your family, or members of the company's board of directors.
These all become insiders because they have those elevated conditions. Now, each of these two groups, external attackers and internal attackers, has its own respective forks, with a few different types of each. Within insiders, the four categories are accidental insiders, opportunistic insiders, disgruntled insiders, and determined malicious insiders.
The difference between each of those also comes down to motivation, and you hit on the accidental insider: the person who accidentally does something. That type of attack is somewhat unique in that the accidental insider is also a victim, right?
They're the ones who clicked the link they shouldn't have clicked, or downloaded the attachment they shouldn't have downloaded. Whereas all the others are actively taking action: the opportunists are actively taking action, the disgruntled are actively taking action, same with the determined malicious insider.
When we're thinking about how a system could be attacked, we're considering all these different attackers, all their different motivations, and then different techniques for how you would deal with each. So, for example, one thing that is somewhat effective against accidental insiders is training, where you say, you're not supposed to do blank.
And they say, oh, I didn't know that. Thanks, now I won't do blank. Whether or not they actually change behavior is usually up for debate. But the point is, take that same message to a determined malicious insider. That type of attacker joins the company expressly to hurt it; it doesn't matter how much you train them.
When you say to them, hey, don't do blank, they're going to be like, yeah, no, I'm going to do it anyway.
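One way to jot down the attacker tree Ted lays out above, as a rough sketch only: the descriptions paraphrase the episode, and the external branch's subtypes, which he doesn't enumerate here, are left as a placeholder.

```python
# Rough sketch of the attacker taxonomy described above. Annotations
# paraphrase the episode; they are not an authoritative classification.
ATTACKER_TREE = {
    "external": None,  # forks into its own types, not enumerated in this episode
    "internal": {  # anyone with elevated trust and/or elevated access
        "accidental": "also a victim: clicks the bad link, opens the bad attachment",
        "opportunistic": "actively exploits an opening when one appears",
        "disgruntled": "actively takes action against the organization",
        "determined_malicious": "joins expressly to do harm; training won't deter them",
    },
}
```

Mapping countermeasures onto this structure makes Ted's point visible: training only plausibly moves the accidental node, and does nothing for the determined malicious one.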
JH: [00:19:16] Yeah, and on the accidental one, the thing that comes to mind, and this goes back to somebody we had on the podcast a long time ago, Erin, I think maybe the second guest: she worked in the financial industry, and she framed it as, like, the experience effect or whatever, where, when they would go out and do research with people, everyone was kind of apathetic. Like, yeah, I don't know, my social security number and other stuff is probably online somewhere.
Everything gets hacked. And that's in more of a consumer setting, but it seems like part of the training challenge on the insider stuff, in a business context or whatever it may be, is getting people to care, believe, or understand that some of this stuff is still preventable or matters.
I don't know if you see that, or if that's a real part of it or not. It just feels like a tough dynamic to figure out.
Ted: [00:20:03] It sure is. I have a good friend who is in the research department at MIT, and she was talking about some of the research she's doing on how you change or create a culture around cybersecurity. And the point she was making, once she said it so simply, I was like, oh my God, yes.
You just distilled years and years of security research into the simple idea that no change will happen until you can change values and behaviors and attitudes, and you won't do those things unless you make it matter to the person. Right? So the individual might be like, okay, well, yeah, I guess we don't want the company's stuff to get stolen. But when it's, hey, if the company's stuff gets stolen, you get fired, or the company goes out of business and you lose your job, or we aren't profitable so you don't get that bonus you've been counting on, now all of a sudden people are like, okay, hold on, I'm listening.
It really changes the way people think about it. So that's one of the big things: the people in the community who are really focused on training are narrowing in on how to actually change behavior, how to make it matter to people.
Because here's the reality, or rather, here's the discrepancy between the reality and the way people think. The reality is that security is everyone's job, but the way most people think is that security is someone else's job. And until we can bridge that chasm, these accidental insider security breaches are going to happen all the time.
Erin: [00:21:53] Yeah, absolutely.
JH: [00:21:56] Seems like a tough one.
Cool. Maybe just for my own curiosity: when you're out there looking through all the issues you see, whether it's from clients or just what pops up in the news, I imagine you keep a pretty close eye on this. How much of this stuff is still just, man, people aren't even doing the basics, stuff we've known for years, versus the hackers keep getting more and more cutting edge and coming up with things we never even could have imagined?
Is there any sort of split on that? I feel like the way it's portrayed in the media or whatever, it's always this cutting-edge cool stuff.
But my gut is that people just aren't even doing some of the basics right that we've known for a long time.
Ted: [00:22:37] It's actually both of those things, but there's also a third condition. So yes, there are people who aren't doing the fundamentals that they should be doing. And yes, the attackers are advancing and themselves innovating at this relentless pace. True. But there's a third condition that you didn't identify in your question.
And this is the thing that kicked my butt into writing this book. When I noticed this condition, I said, I have to write a book; I can no longer allow it to exist. It is this: I noticed that while I was talking to our customers, or our prospective customers, they were all saying the same ten things.
Now, they didn't all necessarily have the same ten problems, but all of them had some of these same ten. And I thought that was really interesting. As I thought about it, I thought, you know, it's kind of fascinating that no matter what industry they're in, what kind of client they're trying to serve, what kind of user they have, what assets they protect, they all share the same security challenges.
And then, as I started thinking about how you solve those challenges, that was the thing that kicked my butt into gear. That's the third condition: the conventional solutions to those problems were almost universally wrong. Think about that. You've got companies out here trying to change the world, whatever their way of changing the world is.
They recognize they have a security challenge. They go seek the solution to that challenge. And the solution they find is incorrect. That's crazy. When I connected those dots, I started writing my book that day. And it's not just the book, though; being here is part of what I'm trying to advocate for.
The talks I give, the keynotes, the workshops, even teaching the customers we're talking to: that needs to change. So I'm out there really trying to help people, saying, hey, look, everyone in the world kind of thinks you're supposed to do X, but actually you've got to do this other thing.
A great example, I mean, a lot of the things I've talked about today fall into this for sure, but even black box versus white box. That's a great example of the way the world thinks incorrectly. So that's the long way of saying, you know, you're right.
A lot of people are continuing to fail on the fundamentals; that's one thing. The second thing: attackers are relentlessly innovating and advancing. But there's this third thing, that the people who are out there actively trying to do security right are often being given the wrong approaches to do it.
Erin: [00:25:23] Are there tons of flavors of that? You mentioned black box versus white box, and I'm sure there are potentially a lot of things, but, yeah.
Ted: [00:25:37] Yeah, there are many certainly the way people think about sharing information, you know, black boxes, white box That's one way people even think about what security testing is, there's this common misconception that you can just buy a tool. You can push a button and the problem goes away.
And there are plenty of companies. This marketing says exactly that. Literally that. And there are issues about when you should think about security. A lot of people think security, something comes later, but that's actually a far worse way to do it. It's more expensive, it's less effective. And then even some of the things we talked about earlier about the business value, a lot of people think that business value is removing a bad thing.
And that's only a small part of it.
JH: [00:26:19] Hmm. Yeah. It kind of feels like your own personal health, in the sense that people miss some of the basics, but there's also a lot of bad conventional wisdom that people get fed that can send you down the wrong path too. Cool.
Erin: [00:26:31] Ted, last question. Given this is a UX podcast, we've got lots of well-meaning UX designers, product managers, and researchers listening to this, maybe not thinking a ton about security every day. What should they know about creating a great UX while being security-minded at the same time, in 30 seconds?
Ted: [00:26:56] You should know that the bad people are certainly out there, right? And they want to come after systems, even for reasons you might not necessarily understand up front. But it's okay to accept that you're not the expert at that: building things and breaking things are different. And that's why companies work with other companies who are experts in that.
So my call to action to you would be, you know, study up on these ideas, for sure, but also accept that you don't have to become an expert in this topic area. You should go advocate within your company and say, hey, who are we working with who's going to bring that malicious mindset, who can help me build better, more secure systems? Because I'm not the expert, even if I am interested in it.
It's okay to accept that, because you're really good at what you do. So be good at what you do, and then get the expertise you need to complement what you're doing.
Erin: [00:27:52] Awesome. We'll leave it at that. Thanks, Ted.

Creators and Guests

Erin May
Host
Senior VP of Marketing & Growth at User Interviews
John-Henry Forster
Host
Former SVP of Product at User Interviews and long-time co-host (now at Skedda)
Ted Harrington
Guest
Ted Harrington is the author of HACKABLE: How to Do Application Security Right and the Executive Partner at Independent Security Evaluators (ISE), the company of ethical hackers famous for hacking cars, medical devices, and password managers. He’s helped hundreds of companies fix tens of thousands of security vulnerabilities, including Google, Amazon, Microsoft, Netflix, and more.