Elements of an Effective Insider Threat Program – 2019 AT&T Business Summit


[MUSIC PLAYING]

TODD WASKELIS: Good afternoon, everyone, how are you? Good, good. My name's Todd Waskelis. I'm an AVP in the AT&T Cybersecurity group, and we're going to spend some time today, as you know, because you signed up for this, talking about insider threats. Insider threats are becoming more and more top of mind for organizations. We see stats saying that anywhere from 30% to more than 45% of attacks are associated with insider threats. But as security professionals, we are often so focused on the external threats, right? The bad actors from the outside, protecting our perimeter. We fail to really dig into how we start dealing with the threats inside of our environment, not just our employees but our partners and others like that. I've got an incredible panel here with me today. I'm just going to ask, from left to right, if they would take a minute and introduce themselves.

DAVID MAIMON: My name is David Maimon. I'm a professor at Georgia State University and the director of the Evidence-Based Cybersecurity Center.

TROY WILKINSON: My name is Troy Wilkinson. I'm the head of cybersecurity research and analysis at IPG, which is a New York-based Fortune 500 advertising company.

JOSEPH BLANKENSHIP: Joseph Blankenship, vice president and research director with Forrester Research.

TODD WASKELIS: Awesome. We're going to have some time at the end for Q&A, so hopefully you can queue up some questions for our experts here. But why don't we just start off with you, Joseph, if you don't mind. What makes an insider threat different from other threats?

JOSEPH BLANKENSHIP: Well, I think you said it at the top of the talk, right? Usually, when we think about threats in the context of cybersecurity, we're thinking external threats, external actors. An insider threat, instead, is actually the people we trust. We can get a look at everything coming from the outside, and if it looks malicious, we can probably make a decision: hey, that's likely malicious, we should block it, right? We have most of our defenses focused on the outside, and we tend to trust all of the people on the inside, which is why at Forrester we have the Zero Trust framework, in which we don't trust anybody, including ourselves and our co-workers. The other big difference is that while you can treat an external actor as a foe, you can't always treat your co-workers as foes, even though you might like to. You actually have to have some level of trust, some level of access to information, for all the people in your company. So you can't just assume that everyone is guilty until proven innocent; instead, we've got to look at everybody on a case-by-case basis and actually treat them like trusted co-workers.

TODD WASKELIS: I think you brought up a key point there, right? It's not just about our employees; it's the trusted entities inside of our organization. That could be employees, that could be partners, that could be vendors, really anything across the board. So Troy, what makes it difficult from a cultural perspective to deal with insider threats? What have you seen?

TROY WILKINSON: I think it's a people problem, right? I mean, the insider threat case that we all know best, and from my background, is Ed Snowden, right?
He moved around within certain jobs, and his privileges and accesses were never cut off, and he was able to get access to more data than he should have and walk out the door with it, one of our biggest national problems back then. But it's all about being people-centric in what you're trying to figure out. And so user-behavior analytics has really come to the forefront these days: determining what's normal, what people should have access to, and plotting that across time so you understand when they're doing something they shouldn't. I mean, you have employees who are accessing files when they're about to leave the company. They put in their notice, or maybe they don't, and they start downloading all of their work product, or maybe stuff they shouldn't. So it's all about understanding the people-centric side of the problem. With culture, as you said, you can't treat them as hostile, but you have to use data. I'm always going to go back to data and science. What I love right now is that over the next two, five, ten years, we're really going to start relying on the data. And so the analytics: how do we apply machine learning and data to understand what people should do, what they shouldn't do, what's normal, what's not normal, and start alerting on and surfacing the cases that stand out?

TODD WASKELIS: Yeah, and my follow-up there was going to be, how do you profile your internal threat? How do you go about doing that? But you've kind of touched on it with the data.

TROY WILKINSON: Yeah. So I started my career in law enforcement and was an investigator for a long time. And you know, it's always the bookkeeper, the nice little old lady, who's embezzling from the company. So when you look at profiling somebody and trying to understand who would do this, it's tough, because perhaps they're in a bad financial situation, they've hit hard times, something in their medical life has come up; you can never know. Trying to create an overall profile of your employees to say who's more or less likely to steal data doesn't always work. What you can rely on, what's never wrong, is data. As long as you have the algorithms and the things that can show you what these folks are normally doing, what they should be doing, and when they try to go outside those bounds, then you have an indicator to say, OK, let's watch Sally. She's never tried that before, let's keep an eye on her.
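A minimal sketch of the baseline-and-deviation idea Troy is describing, assuming a hypothetical log of per-user daily download counts; the names, numbers, and threshold are illustrative, not any particular UBA product's method:

```python
from statistics import mean, stdev

# Hypothetical event log: per-user daily file-download counts, oldest first.
daily_downloads = {
    "sally": [4, 6, 5, 3, 7, 5, 4, 6, 5, 180],   # big spike on the last day
    "bob":   [20, 25, 22, 19, 24, 21, 23, 20, 22, 24],
}

def flag_anomalies(history, z_threshold=3.0):
    """Flag users whose latest day deviates sharply from their own baseline."""
    alerts = []
    for user, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; a real system needs a fallback rule here
        z = (latest - mu) / sigma
        if z > z_threshold:
            alerts.append((user, latest, round(z, 1)))
    return alerts

# Sally's 180 downloads stand out against her own history; Bob's day is normal.
print(flag_anomalies(daily_downloads))
```

The point is that each user is compared against their own history rather than a global rule, which is what lets the "nice little old lady" surface when her behavior changes.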
TODD WASKELIS: Yeah, that makes sense. I think we traditionally talk about cybersecurity as people, process, technology, but historically we've been technology, technology, technology, process, people, right? We've always put "people" last, and I think we've really got to shift to moving that up front. And I don't think there's a vertical that's immune to this, right? I believe there are insider threats whose primary motive, from day one, is to get into your organization and do something bad to you, but then there are the opportunists, right? There was an individual at a social media organization who was using his access to stalk individuals in the user base. We see people stealing from financial institutions and rerouting money, or whatever it might be. And then even in manufacturing, right? We've seen trade secrets going out the door. And then Snowden, the federal government. So I don't think there's anybody that's immune to it. Again, I think there are some opportunistic insider threats who just take advantage when they can, but there are definitely some malicious ones. Then there's the group, I think the FBI calls them the knuckleheads, who are really just the dummies. They just fat-finger something and cause some problems. So David, talk–

DAVID MAIMON: So maybe I can add to that. As a criminologist, I know that opportunities are very much important, right? Opportunity feeds into folks' decision-making about whether they will engage in a type of crime or not. So profiling is cool and whatever, and maybe some of us have a legitimate reason to actually profile people, right? But at the end of the day, if you profile your employees, you're making decisions about their opportunity, or their potential, to actually harm the organization, and if that's the case, why would you keep them there in the first place, right? I would think opportunity is the key. And I think: forget about profiling, and talk instead about designing out crime. What should happen is, of course, focusing on the human, but also configuring the environment, configuring the systems, configuring the network in a way that will nudge the offender to simply desist. That's what I think is missing right now, and that's where I think we should go. So profiling is OK, but there's a lot of research indicating that maybe profiling is not the best thing to do. Maybe the right thing to do is focus on opportunities.

TODD WASKELIS: Reduce the opportunity for them, right? Yeah, that makes a lot of sense. We do that now externally, right? When we think about our external adversaries, we do patch management and all of those things to reduce the opportunity, but we don't look enough inside. So David, evidence-based security: can you take a few minutes and tell us about that as a program, and how does it apply to the insider threat?

DAVID MAIMON: Sure. Evidence-based cybersecurity is essentially the approach that says we need to move from a model in which we make decisions based on our personal background, our emotions, our experience in the field, to a model that asks folks to make decisions based on scientific data and evidence. The focus of the approach is on the human, and it calls for the implementation of rigorous scientific methods: field experiments, surveys, observations. I assume folks sitting in this room are aware that this approach has actually worked in many, many fields, like the medical field. If you think about mortality rates among kids, among pregnant women, and so on, we see that because of the evidence-based approach we were able to reduce mortality rates. The same thing with respect to policing. Before the '90s, crime rates were very high in the United States, but starting in the '90s we saw more and more police departments across the nation adopting this evidence-based approach in order to really understand, in a scientific way, what works and what doesn't, and then we saw a reduction in crime rates in our society.
So we believe this approach is also relevant in the context of cybersecurity in general, when we're trying to understand attacks, and in the context of insider threats in particular. Insider threat, again, is an umbrella term for many, many types of activities and different types of actors, so the approach calls for understanding those malicious and non-malicious actors on the networks and computers that they use. We believe this approach is very relevant in the context of insider threats because it allows us to answer two key questions. First, what risk do people bring with them into the organization? And second, how can we strengthen the defenses of the organization: the network, the computers, and so on? This is pretty much what the evidence-based cybersecurity approach is all about, and I can talk later about how we actually answer those questions.

TODD WASKELIS: And in gathering that evidence, are there some tools and technologies you look to leverage inside of the environment? Would you say, hey, start with this type of tooling if you're looking to deal with insider threats? Where would you start?

DAVID MAIMON: That's a very good question. In the context of insider threats, we haven't done a whole lot to deal with that. But the approach essentially calls for really testing tools and policies in the field. Before I came here, I actually ran a literature review, so to speak, and tried to figure out what other people have been doing with insider threats. And I don't know if folks will be surprised, but there's really not a whole lot. I mean, there's a lot of research, but it's not empirical, evidence-based research. The goal is to actually take those tools that focus on identifying misuse or identifying anomalies and really test whether they work in the field. I'm not familiar with any research that actually does that. I know that, anecdotally, folks can talk a lot about the effectiveness of Splunk, or honeypots, or other tools in detecting insider threats, but there's really no scientific evidence indicating that those tools are really worth anything. And I'm sorry, I'm a scientist, so I'm not trying to sell anything.
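David's call for field-testing can be made concrete: run the tool in production for a trial period, investigate a sample of sessions to establish ground truth, and only then score the alerts. A small sketch of that scoring step, with entirely invented records standing in for real trial data:

```python
# Hypothetical field-trial records: did the tool alert on the activity, and
# did a follow-up investigation confirm actual misuse?
trial = [
    {"alerted": True,  "misuse": True},
    {"alerted": True,  "misuse": False},
    {"alerted": False, "misuse": True},
    {"alerted": False, "misuse": False},
    {"alerted": True,  "misuse": True},
    # ... a real field experiment would have thousands of observations
]

tp = sum(r["alerted"] and r["misuse"] for r in trial)
fp = sum(r["alerted"] and not r["misuse"] for r in trial)
fn = sum(not r["alerted"] and r["misuse"] for r in trial)
tn = sum(not r["alerted"] and not r["misuse"] for r in trial)

precision = tp / (tp + fp)         # of the alerts raised, how many were real?
recall = tp / (tp + fn)            # of the real misuse, how much was caught?
false_alarm_rate = fp / (fp + tn)  # how often are innocent users flagged?

print(f"precision={precision:.2f} recall={recall:.2f} fpr={false_alarm_rate:.2f}")
```

Without numbers like these from a real deployment, "the tool works" is exactly the kind of anecdote David is objecting to.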
TODD WASKELIS: I'll switch to the other side. So from Forrester's perspective, what are you seeing in terms of tools and technologies that organizations are looking to leverage against insider threats?

JOSEPH BLANKENSHIP: Well, usually when I get a customer inquiry and they're talking insider threat, the first thing they ask is, what UBA should I buy? I usually flip that question around and say, before we start buying technology, let's figure out what you're trying to solve for. And to your point, you've got to get the process and the people ahead of it. This is very much a process-oriented problem as well. I make the analogy: what are you going to do when you catch an insider threat, right? You got one. It's like a dog chasing a car. If the dog ever caught the car, he'd be like, wow, I got this car. What am I going to do with it? If you're a CSO or a security analyst and you catch the insider, what are you going to do? Are you going to call the manager? Are you going to get this person fired? Are you going to have a warning issued, something like that? Are you going to reduce their privileges? Are you going to do anything that's going to land you in court later? Because if you do any of those things for one person and then do something different for the next two people, you have now created liability and more risk. You would have been better off letting the insider run rampant in your environment than handling it inconsistently. So it's very much a process thing. What are we going to do when we catch one? What's our process? What's our investigative flow? Then let's start talking about the tools to enable that, and about the people we're going to hire. We did some research to figure out who makes really good insider-threat analysts: lots of former law enforcement and counterintelligence people. People with a rigorous investigative mindset, as opposed to a lot of broad technical skills, make better insider-threat analysts.

TROY WILKINSON: I'd just like to add to that, because every business is different, and this is probably going to come out later, but you have to understand what your risk assessment says, and you should be doing a three-part risk assessment covering operational risk, litigation risk, and reputational risk. Once you get to the bottom of what's important to you: every business is different. It might be your file structure, for a business with a lot of important intellectual-property files you're trying to protect. It could be e-commerce, where your customer database is the most important thing in the world to you. Once you start looking at that problem statement of what you want to protect, then you start looking at ways to see unauthorized access. There are tools like Varonis out there that do file-level access analytics, you know, UBA, and there are tools like ExtraHop and Darktrace that do network-layer behavioral analytics. So once you identify where your risks are, what you want to protect, and how you want to protect it, you can start layering the tools on afterward, and you call Joseph and say, what's the best one for me?

JOSEPH BLANKENSHIP: Absolutely.

DAVID MAIMON: There's one thing that I think is important to emphasize: insider attack, as I indicated, is a very broad term. So I agree 100% that the CSO needs to figure out what he's going to do with someone who is an insider attacker, but we need to define that, right?

JOSEPH BLANKENSHIP: Oh, absolutely.

DAVID MAIMON: Because you have the malicious attacker, and then you have the non-malicious attacker, that employee who accidentally clicked on a phishing link, and you have to have different policies for those different insider threats.

JOSEPH BLANKENSHIP: Right, and to me that's part of the process too: we've got to categorize this. Was this actual maliciousness? Was it carelessness, the knucklehead? Was it the person trying to get around policy because, oh wow, this security policy is really getting in my way, so if I just download everything to my personal PC, I can work off of that and all of the security stuff isn't in the way? That's kind of the accidental insider, if you will, but they're still violating policy. So you're right.
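The categorize-then-respond discipline Joseph and David are describing can be written down as a playbook, so the response is consistent across cases, which is exactly the liability point above. A hypothetical sketch; the categories follow the panel's taxonomy, but every response step is an invented example, not a recommendation:

```python
# Hypothetical, pre-agreed playbook: the same category always triggers the
# same process, avoiding the "different response per person" liability.
PLAYBOOK = {
    "malicious":  ["preserve evidence", "engage legal and HR", "suspend access"],
    "careless":   ["notify manager", "require security training"],
    "workaround": ["review the policy causing friction", "coach the employee"],
}

def respond(category: str) -> list[str]:
    """Return the agreed process for an incident category, never ad hoc."""
    if category not in PLAYBOOK:
        raise ValueError(f"uncategorized incident {category!r}: triage it first")
    return PLAYBOOK[category]

print(respond("careless"))  # ['notify manager', 'require security training']
```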
TODD WASKELIS: It reminds me of the conversation you often have about DLP. A lot of organizations jumped into DLP and wanted to get it deployed on their network to see what was going on, but to your point, you've got to be careful what you look for, because then you've got to act on it, right? Sometimes maybe it's better not knowing, right? But once you have all of that information, you've got to do something. Troy, could you double-click a little bit into building a risk program? I think that would be helpful for some of our people to understand. Where do you start with that? How do you begin? If I'm looking at my business from a business owner's perspective, how do I start thinking about that?

TROY WILKINSON: It's funny, because most people assume that the risk assessment should come from the CSO's office, and I postulate that it should come from the CEO and the board. They should mandate that a risk assessment include all business units and all business heads, because the risk we're going to find is usually somebody in accounting, or HR, somebody who is an accidental insider threat, but you also have to have that culture of security. It's so important that you get buy-in from the executive level, because if we as CSOs or executive directors in security create the mandate, it's not going to be followed; but if it comes from the top down, from the board level down, and it's a revenue-impacting, bonus-structure type of buy-in, then you're going to get people to cooperate. So the risk assessment, again, goes into: how is this going to impact the company? You can lay out whether you're compliant with PCI, or HIPAA, or SOX (Sarbanes-Oxley), whatever you're compliant with, along those three risk categories, and then you can marry that up. In our case, we basically look at IPG through the NIST pillars, then we map that to the CIS Top 20 controls, and then we use that to test ourselves against the MITRE ATT&CK framework. By using these frameworks to actually test the controls we have in place, and their effectiveness, we can grade ourselves and get better. It also helps prepare our roadmap for 2020 and 2021 as we're looking to buy new technology. So the risk assessment has to come from a broader level. And to Joseph's point, and this is the foundational part: it has to be in policy, in writing; it has to be signed; it has to be part of that initial hiring package; it has to be annually reviewed and signed again. We have to make sure our employees are aware of this program, because if we don't, they're caught in the mousetrap without ever being aware that it was wrong in the first place.
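A toy sketch of the map-and-grade exercise Troy describes, tying each internal control to a framework reference and a tested pass rate; the control names, mappings, and scores here are all invented for illustration:

```python
# Hypothetical control inventory: each control mapped to a CIS control and a
# MITRE ATT&CK technique, with a pass rate from internal effectiveness tests.
controls = [
    {"name": "MFA on remote access",        "cis": "CIS 4",  "attack": "T1078", "pass_rate": 0.90},
    {"name": "Least-privilege file shares", "cis": "CIS 14", "attack": "T1083", "pass_rate": 0.60},
    {"name": "Egress monitoring / DLP",     "cis": "CIS 13", "attack": "T1048", "pass_rate": 0.40},
]

# Grade each control and order the roadmap weakest-first.
for c in sorted(controls, key=lambda c: c["pass_rate"]):
    grade = "OK" if c["pass_rate"] >= 0.80 else "improve"
    print(f'{c["name"]:28} {c["cis"]:7} {c["attack"]}  {c["pass_rate"]:.0%}  {grade}')
```

Graded output like this is what feeds the roadmap conversation: the lowest-scoring rows become the next budget line items.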
DAVID MAIMON: Maybe in the context of risk assessment, one thing that's important to emphasize is that it's not only tools we should take into consideration when trying to deal with insider threats, but also policies. Many organizations have policies that essentially penalize employees for clicking on a link, right? Reducing their paycheck; I've actually heard of an organization that will fire you if you click on a link. Now, the question is whether those policies are effective or not, and again, like the tools, we still don't know.

TODD WASKELIS: You know, we've said it three times already now, and it reminds me of a guy I knew a long time ago who ran a bar. Part of how he trained his bartenders was to teach them every way they could steal from him, right? He would walk them through: hey, if you do this, I'm going to catch you; if you do this, I'm going to catch you. He felt he reduced his exposure by being open and honest, right? And I think we do that a little bit with our employee awareness and training programs, to some degree, but the thing that's been mentioned over and over is culture, right? So how do you instill that? How do you shift the culture of an organization? I'll give you all a chance to talk on this, because I know a lot of individuals feel like, this is great, but how do I take this back to my organization and put it into effect? So how do you try to drive that in the culture, Joseph?

JOSEPH BLANKENSHIP: Yeah, it's actually funny, because we're doing some research right now that talks about this: how does the CSO get out of the security team's area, wherever that is (usually, theoretically, in a basement someplace where it's dark and everyone's wearing hoodies), and actually get out into the rest of the organization and demonstrate that security is a business enabler? With this whole concept of security, we've always been kind of the Department of No. We're the people who tell you how long your password has to be, and that you have to change it every 90 days, and we just really ruin your lives with all this kind of stuff. Instead of that mindset, it's really about the CSO going out and demonstrating, to your point about risk, here's the real liability that resides out there. A lot of these insider-threat programs get started because the company had an incident, or maybe a board member read about a peer company that had an incident, so they realize there's a possibility here. Then it's security's role to come in, educate, and really start talking about how we do this culture-wide. And insider threat is probably that one problem, along with the phishing problem David was talking about, where if you don't have that culture of security, you will never fix the clicking-on-things problem. You will never get to the point where your people are aware. There was a great case with a manufacturer in North Carolina (I'm sorry, I'm taking too much time) that had a policy on their campus of "no removable storage media," right? Nowhere on the campus, because they had corporate secrets they wanted to protect. An employee is walking along, looks down, sees an SD card lying on the sidewalk, and he's like, wow, that shouldn't be here. Why is that removable media here? Now, most anybody in this room would say, whoa, wow, free SD card, I'll go put that in my computer. This guy was smart. He actually took it to corporate security and said, I found this on the ground, it shouldn't be here, find out what it is. Corporate security gets in there and finds gig after gig of data they did not want getting out, and so now it's about who did this and why, right? So they do some evidence collection. They figure out what machine it came from. They monitor the user of that machine for a period of time, and they ended up calling law enforcement and having that person arrested. But that's cultural awareness: that's not supposed to be here, and here's what to do with it, right?

TROY WILKINSON: Hashtag no hoodie? We could have a two-hour session on culture alone, because it's such an important topic these days, and I think it comes from emphasizing it at all levels.
What I've seen be successful is rewarding people for contributing. We have challenge coins from the CSO's office, and if one of our 100,000 employees does something outstanding, like what Joseph just mentioned, we recognize it publicly and they're given one of our challenge coins, which they can use to say, I helped. We've also moved to more interactive security-awareness training, so instead of just click, click, yes, yes, yes, now it's interactive videos that are fun to watch and informative. And I think, as citizens at home, we understand that cybersecurity is a thing now. We're connected through our watches, our thermostats, and our toasters, as we talked about, and so even the average person who doesn't know anything about cybersecurity is aware that this is a problem. So I think we're seeing more buy-in from the general employees, and we're also getting buy-in at the board level, because we have people who are acutely aware of the financial risk to our business if we get hit with something significant.

DAVID MAIMON: So maybe I should– I mean, I'm sorry, but this is one of the reasons I love cybersecurity and being a professor of cybersecurity: I get to be the disruptor in these sessions and come up with provocative questions. As a sociologist by training, I don't understand what a culture means in the context of cybersecurity, so it would be really cool for me to understand. When you say you could put together a two-hour session on culture: when I think of culture, I think about people dressing a certain way, going to specific sporting events, eating specific food. In the context of security, I don't see that. I see maybe people complying with guidelines, with policies. I don't see culture around it.

TODD WASKELIS: Do you see more of a mindset? Like how do you [INAUDIBLE] mindset?

DAVID MAIMON: That's what I see, yeah.

TROY WILKINSON: Awareness.

DAVID MAIMON: And what I'm trying to think about is, OK, let's say there is this thing that you guys call a culture. How do you operationalize it? What would you put together in order to really test whether it's effective in reducing the risk in the organization or not? I mean, I don't know. I don't think that you guys– maybe you guys know, and it's not only you, right? I think it's the security field in general. The cybersecurity field suffers from the fact that we don't know how to measure things. We come up with these really weird ways to measure things using a sample of three people, and then we think we should sell it. Again, in the context of cybersecurity culture, I don't know what it means.

TODD WASKELIS: We've never been able to measure cybersecurity, right? It's like air-conditioning: it's either on or it's off, and you only notice it when it's off.

DAVID MAIMON: Well, I mean, I think it's easier to measure cybersecurity itself. Again, I've spent seven years thinking about this, and the harder part is really trying to understand what cybersecurity culture is all about.

TROY WILKINSON: To me, culture is really just awareness: taking people who just go to work, say as an accountant, and bringing their awareness of cybersecurity up, so they understand there are threats out there that could impact them personally or the business.
So we do use the word culture interchangeably with the word awareness. In the example Joseph gave, this person saw the SD card, knew it was not supposed to be there, and brought it to the attention of security; perhaps we helped educate that person. We increased their awareness, and now they were more aware themselves. So culture, to me, is an enterprise awareness of security: helping people be a part of it and want to be on the good side of helping us. In that case, turning the card in instead of taking it home, doing something else with it, or just throwing it away.

DAVID MAIMON: Yeah, awareness definitely works for me more than culture.

TODD WASKELIS: How do you feel about vendor agnostic? I was never a fan of that. I believe in vendors.

JOSEPH BLANKENSHIP: Yeah, they're definitely out there.

TODD WASKELIS: Right, they're there. I've seen them.

JOSEPH BLANKENSHIP: Exactly, they call me every day.

TODD WASKELIS: So as we look at the next generation of the workforce that's coming up, and I say this with authority as the father of two teenage girls, kids who really have very little awareness of their own personal information, they're posting stuff all over social media, right? This rainbows-and-unicorns world of creating culture and awareness is going to become more and more difficult. So yes, people and process are important, but let's shift to technology. Forrester's big on the Zero Trust model, right? If you think about this from a technologist's perspective, what can we do today when we look at our environment? How do we start segmenting things off, or doing whatever we need to do, to start addressing the insider threat from the technology perspective? And I'm not talking about putting in more monitoring tools and those types of things; with what we have today, what can somebody do?

JOSEPH BLANKENSHIP: I think the first thing we can do is exactly what we've been talking about: have the technical controls in place that actually reduce the threat surface. The whole idea behind Zero Trust: has everybody heard the phrase "crunchy exterior, chewy center"? Does that sound familiar? The M&M model of security. When Zero Trust was first conceived, we were saying, we've got this chewy center, and basically everything that lives in this center is trusted. That means we're going to let everyone on this panel go access all of our file shares. They can have access to applications. We're never going to turn their access off, like with Edward Snowden: as they go from one project to the other, we'll let them accumulate credentials as they go, right? So the whole concept of Zero Trust is: let's identify the individual and say, this individual is matched with this project or with this file share, and let's keep making sure that the individual is, A, who they say they are, and B, not in policy violation, so we're continually making sure they're supposed to be there. And let's revoke their privileges when they're no longer associated with something. So that's all about identity. Identity is the number one challenge there. We used to say it was data, because we were going to segment the network around data and all that kind of stuff. Now we segment more around the user, because we actually isolate the user and associate them with the right places to be.
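A minimal sketch of the continuous check Joseph describes: verify identity, verify that access matches a current assignment, and strip entitlements that have outlived their project. The data structures and names are hypothetical, not any product's API:

```python
# Hypothetical directory state. Under Zero Trust, access must match a current
# assignment; entitlements left over from old projects are credential creep.
assignments  = {"contractor7": {"project_a"}}               # current work
entitlements = {"contractor7": {"project_a", "project_b"}}  # what they can touch

def authorize(user: str, resource: str, authenticated: bool) -> bool:
    """Allow access only if identity is verified AND the assignment is current."""
    return authenticated and resource in assignments.get(user, set())

def revoke_stale(user: str) -> set[str]:
    """Remove entitlements with no matching assignment (run continuously)."""
    stale = entitlements[user] - assignments.get(user, set())
    entitlements[user] -= stale
    return stale

print(authorize("contractor7", "project_b", authenticated=True))  # False
print(revoke_stale("contractor7"))  # {'project_b'} is revoked, not accumulated
```

Run on every access and on every project change, checks like these are what keep a Snowden-style accumulation of access from happening quietly.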
TODD WASKELIS: Troy?

TROY WILKINSON: To continue on that: we have employees in the business who work in accounting and HR and have no need to access production servers or files, yet in a lot of our companies they can, and so it's all about segmentation. I mean, it's as simple as flipping a switch, really: looking at your Active Directory structure and keeping the file shares segmented from users who don't need access. Almost every risk assessment I've done in my career has found this: she's the executive assistant for the CEO, so she needs access to everything, and what happens is she gets the ransomware email that encrypts everything for the company. That's where we really need to focus. It can be the malicious intent of an internal actor, but 99% of the time it's going to be an accidental infection. So we need to look at what access those people have, and how many third-party contractors we have accessing our environment. How many of those third parties do we never audit? Target's a great example: the HVAC company whose access enabled the intrusion into Target and the payment-card malware. It's all about access: understanding when to terminate it and how to monitor it.

DAVID MAIMON: I think that with insider threats, and with cybercrime in general, the focus should be on the human, right? So I agree 100% that we should spend more time and effort talking about those issues. But I think the answer, and again, I'm not sure whether the tools for this are actually available nowadays, is that we all need to come to the realization that it's the human, and we all make decisions, and we need to nudge the bad people, the malicious or non-malicious insider in the case of this panel, to comply with our policy, to mitigate the consequences of an event, to reduce potential harm even if they engage in an insider attack. I think this is where the solution lies, so to speak. Everybody's talking about the technical tools, about AI and machine learning, which is very fancy, and I love those buzzwords, but what needs to happen is that we configure computers and networks in a way that nudges decision-makers to behave in a predictable way: the bad guys to leave us alone, reducing the consequences of an event to the system or the organization, and the good guys, our employees, to comply with security policies and prevent events like this from happening.

TODD WASKELIS: Agreed. And if you're looking at Zero Trust, a lot of organizations say, well, that's a big undertaking, you know? We've helped organizations do that, and it is a big undertaking. So short of that, when we talk about basic cyber hygiene, patch management and vulnerability management, there are things you can do today in your network: going through your Active Directory, seeing who has access to what, compartmentalizing your users better, right? Those are some examples of things you can just do today to make a difference and hopefully reduce the opportunity for somebody to either make a mistake or do something malicious.

DAVID MAIMON: Yeah, it's all about designing out crime, right? We know that from the physical world, from the offline environment, so why shouldn't we try to do the same in the online space?
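A rough sketch of the access review Troy and Todd are describing: compare who actually holds access against who has a business need, and flag the gap, especially for third parties. The group, share, and account names are invented; a real version would pull membership from Active Directory (for example, over LDAP) rather than from inline dictionaries:

```python
# Hypothetical snapshot of share membership, standing in for an AD export.
share_access = {
    "finance_share":      {"alice", "bob", "exec_assistant", "hvac_vendor"},
    "production_servers": {"ops_team", "exec_assistant"},
}
# Who each share's owner says actually needs it (the business-need baseline).
business_need = {
    "finance_share":      {"alice", "bob"},
    "production_servers": {"ops_team"},
}
third_parties = {"hvac_vendor"}  # external accounts deserve extra scrutiny

for share, members in share_access.items():
    for account in sorted(members - business_need[share]):
        tag = "THIRD PARTY" if account in third_parties else "over-privileged"
        print(f"{share}: remove {account} ({tag})")
```

Even a crude diff like this surfaces the executive-assistant-with-everything and the forgotten vendor account before an attacker or a ransomware email does.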
TODD WASKELIS: So I saved the best for last: how do data privacy laws come into play when you're talking about insider threats? You mentioned a couple of challenges around what you're going to do with that person once you catch them. David, any thoughts on how legal and privacy issues play into this?

DAVID MAIMON: Well, again, as a scientist, my suggestion is that you guys follow your legal team's advice with respect to all those insider programs and their implementation. I see two major issues with some of the programs out there today. First is online privacy, as you indicated: the fact that we monitor pretty much everything. But then the legal team will say, the NSA is monitoring your traffic 24/7, so if you don't want them to do that, just don't go on the internet.

TODD WASKELIS: Allegedly.

DAVID MAIMON: Allegedly. So it's pretty much the same thing with respect to the organizational network, I assume, right? If you don't agree to our terms of use, don't use our network, and then you can't really work here. So that's one issue that should be discussed. The other one is profiling. Profiling is a big thing, but essentially what we do with profiling in the context of insider threats is try to assign a score to each employee with respect to the potential harm he or she could cause the organization in the event of an insider attack, right? We can talk a lot about profiling, and we actually started to on this panel. There are issues with profiling, legal issues. The way we do profiling nowadays is also not very good: essentially, we take retrospective data, we make predictions about someone's potential future behavior based on it, and we assign a score based on that, so it's problematic. Then you have another profiling approach, the behavioral-analysis approach that you indicated, which is essentially intuition, right? I hear a lot of people, a lot of CSOs, talking about how they knew someone would generate a problem, an insider attack, simply by talking to him or her and looking at some of the cues that he or she gave off. So these are the key issues I see with respect to the legal discussion around insider threat programs, and I think legal teams should spend more and more time trying to figure them out.

TROY WILKINSON: One thing on that: you have to understand what constitutes a breach, because now, with Nevada, California, and all these other states coming up with data-privacy laws that match or are even stronger than GDPR, there's also notifying customers on top of that. All 50 states have different breach-notification laws that require you to divulge information to the affected clients in that state based on certain criteria. So if an insider threat gets access to a customer database and does something malicious, like downloading it or removing it, do you need to notify those customers? What constitutes a breach? I think there are a lot more challenges around that side of it as well, so that we understand the regulatory landscape that's coming at us so fast.

JOSEPH BLANKENSHIP: The flip side of this, too, on the monitoring: I'm very much an advocate for monitoring.
But the flip side is, if your monitoring is so heavy and overt that your employees feel like Big Brother is constantly watching, that is actually a giant hit to productivity and attitude. It could even encourage people to become insider threats, because they'll become disgruntled. So it's a delicate balance. And you mentioned before, Troy, it's all about educating them that the program exists and why. It's not about you, it's about the data. It's about the company's data; and by the way, it's not your data, it's the company's data, and you have to instill that in folks. The privacy question is definitely a bigger question in the EU than it is in the US. There's virtually nothing that will stop you from monitoring anybody here in the US as much as you want to, but as soon as you get into other geos, it gets much more particular.

TODD WASKELIS: Yeah, I would make a suggestion. I heard this from an analyst; I'm not sure what organization it was from.

JOSEPH BLANKENSHIP: If it was smart, it was me.

TODD WASKELIS: It was smart, so it must have been you. A takeaway for everybody from this session: when you go back, identify that one person, and there should be only one person, in your organization who can use the word "breach," right? You need to make sure that person is empowered, because when they use that word, internally or externally, it has a whole lot of implications, right? So think about that when you go back. We've got about four minutes left. Are there any questions, comments, thoughts?

AUDIENCE: So you talked about monitoring, the actual breach itself, and looking for those people. What about the behavior that causes that in the first place? How much science is being done around that?

DAVID MAIMON: So it's good that I did that literature review beforehand; I read through a few articles before I came here. Disgruntled employees, and again, this is scientific literature that's not really scientific, because it's case studies, tend to be more likely to engage in insider threats. Loners are also more likely to do it. And sometimes I feel uncomfortable generalizing, because we have different types of insider threats. You have the spies, right? And then you have the thieves, who are simply there to steal your data. The literature distinguishes between those two actors, and then another one that I'm blanking on, and each of those actors has different personality traits. The spies are doing this for money, of course; they try to embed themselves in the organization, and usually these guys are in higher ranks. They tend to be vice presidents of the organization. But the thief is just someone who barely makes ends meet. So again, we're all over the place with respect to understanding who these guys are, based on the scientific literature that I'm familiar with.

JOSEPH BLANKENSHIP: Yeah, based on the research that I've done, you're exactly right, David. At that end of the spectrum, those are people who are a little more sophisticated, especially the spies. And believe it or not, there actually are people who are trying to get into your company to steal your intellectual property. It's not just about PII, PHI, PCI, all that; it's about IP. That's the thing that's really valuable here. That's one thing to walk away with.
The one thing you're not going to get audited on is how you're protecting your intellectual property. But behaviorally, yeah, I think the other class is the saboteur.

DAVID MAIMON: Saboteur, yes, thank you.

JOSEPH BLANKENSHIP: They just want to destroy the data, right? And it's usually because they're disgruntled. They're either mad at the manager or the company, or it could be a hacktivist sort of mindset: you know, power to the people, I'm going to destroy the data, hack the world, hack the planet, all that kind of stuff. So there's all kinds of psychology at play in this, but the garden-variety malicious insider is absolutely a thief.

TODD WASKELIS: Any other questions?

JOSEPH BLANKENSHIP: We've got one right here.

TODD WASKELIS: One here and one back there.

AUDIENCE: So you mentioned creating a culture where everybody's aware of security. How do you balance that out with, you know, a couple of instances like the Atlanta Olympics, where the guy discovers the bag and all of a sudden he's the guy they go after? I mean, they made a movie about this, where this guy was treated as the guilty party.

JOSEPH BLANKENSHIP: Richard Jewell, yes.

AUDIENCE: What's that?

JOSEPH BLANKENSHIP: Richard Jewell.

AUDIENCE: Right. Or you look at it from the standpoint of, and I'm not trying to use the term McCarthyism, turn in your friends, and/or the feeling of, well, if my friend's doing this, I can't turn them in. What do I do? So where's the balance, to the point where you have people who want to protect the company, but they don't want to be the bad guy, or they're afraid of the scrutiny that could come upon them?

JOSEPH BLANKENSHIP: Well, I think part of that is to create a communication mechanism where maybe they don't have to identify themselves, kind of like Crime Stoppers. You know: did you see a crime? Do you want to report a murder, but you don't want to have to testify? You can certainly do something like that. But again, the cultural awareness is more about protecting the company. There was a great case with a computer manufacturer where co-workers turned in somebody who was acting very oddly in their cube farm; this person was bringing up IP on the screen and taking pictures of it with their phone. They ended up catching the individual with several gigabytes of data on the phone, trying to walk out the door, and, by the way, with a one-way ticket to China to take all the data with him. The FBI was called in, and that person was arrested. So it's not about turning in your friend; it's about, hey, let's not cripple the company so we're all out of jobs.

AUDIENCE: Troy, you mentioned this a little bit. Where do you see the convergence of physical security and IT security? I'm just curious why I've seen that lagging a little bit in companies. For a lot of the CSOs attending here, one of the first things they do in an investigation is pull badge access, yet a lot of companies still have those as two separate organizations. So when and where do we see that coming together?

TROY WILKINSON: You see it more in some companies than others. And I love this field, because obviously I came from the physical-security, police side. I love it. We have machine learning on video now, so you can use facial recognition to see when employees are coming and going. We use geolocation data to understand when somebody is accessing the network from somewhere they don't normally access it.
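A small sketch of that geolocation check: flag logins from a geography an established user has never logged in from before. The login records are fabricated, and a production version would also account for VPN exits, travel, and impossible-travel time windows:

```python
from collections import defaultdict

# Hypothetical login records: (user, source country), in chronological order.
logins = [
    ("troy", "US"), ("troy", "US"), ("troy", "US"),
    ("mallory", "US"), ("mallory", "US"), ("mallory", "RO"),
]

seen = defaultdict(set)
for user, country in logins:
    if seen[user] and country not in seen[user]:
        # A new geography for an established user: surface it for review,
        # ideally alongside badge records showing where the person physically is.
        print(f"ALERT: {user} from {country}, usually {sorted(seen[user])}")
    seen[user].add(country)
```

Cross-referencing an alert like this with physical badge data (the person badged into the office an hour ago) is exactly the physical/cyber convergence the question is asking about.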
TROY WILKINSON: You have facilities that are able to help you prevent both physical attacks and cyber-attacks, because when you have what we call a blended threat, it's both physical and cyber. The bad guys behind 9/11 were doing both physical and digital reconnaissance: they were taking photographs, and they were trying to tap into street cameras. It's a cyber threat and a physical threat, so these things go together. Some companies have really embraced that, and the CSO is over both physical and cyber. Some companies are completely different: they have a physical security department that doesn't even talk to cybersecurity. So I think it's something we have to encourage, getting these teams together. I won't advocate that it's the same role, but definitely get them together to where they're planning budgets together and bringing data together, because at the end of the day, I'm all for the data, because data doesn't lie. You start applying science to data, you start getting all these insights, and you can find the needles in the stack of needles. Not the needle in the haystack; the needle in the stack of needles. That's what's so important.

TODD WASKELIS: Well, personally, I want to thank you all very much for taking some time out of your day to spend with us. I hope you gained some useful information from this. I'd like to thank my panelists for joining me here today as well, thank you very much. A few of us will be around afterwards; if you have any follow-up questions, we're happy to address them. But again, thank you, and enjoy the rest of the summit.

[MUSIC PLAYING]
