Kirstin Burke:
Hello and welcome to this month’s Tech Talk. Welcome to everybody. We’re so excited to have you here. We’ve decided to join the conversation that’s going on just about everywhere you turn. In every industry, everyone’s talking about chatGPT and generative AI, and in our area of business there’s a lot of concern and a lot of interest, and we just wanna jump into that conversation. We feel we’ve got some interesting perspectives to share that hopefully will help all of you think about chatGPT and where it plays in terms of your cybersecurity approach. I don’t know about you, but I’ve been overwhelmed at the adoption, at how quickly this has taken off. Six months ago, chatGPT launched, and now, as of February, I think there are 1 billion monthly users.
So the adoption of this, and the curiosity around it, has been amazing. Every industry has taken note. It will be interesting to see how and where it transforms businesses and processes within the cybersecurity field. We’re very curious how it’s gonna transform our business, both for the attackers, the bad guys, and for how it helps us do our jobs better. And that’s really what we wanna tackle today: give folks some data points to think about. Shahin, you’ve been at RSA all week, and you’ve been having conversations about this a lot longer than any of us have. Talk to us a little bit about the attacker mentality, right? Where do we see and anticipate that these cyber criminals are going to leverage chatGPT for more gain?
Shahin Pirooz:
They’ve already begun to, number one. But it’s not in the way most people are probably thinking about it. Most people are thinking, some of us are dumb and we put information we shouldn’t into chatGPT, to help it clean up our language, to help it clean up our code, whatever, and that data ends up somewhere the hackers could get access to. That is certainly something we all should be concerned about. We should be putting our best mind forward instead of our best foot forward, obviously, and not put our data at risk by making bad decisions about where we put it. But that’s not actually where the threat is. chatGPT is an infant that has been given the internet. It has crawled the internet, and there are learning models that help it understand intent from the text that it’s read.
And it effectively creates this intent-understanding model based on what the developers have done with the AI modeling, and that creates an iterative approach to having a dialogue that feels like natural language, that feels like a human being. Hacking has changed over the last decade. It used to be that script kiddies would get caught in five seconds because they would write something stupid and get busted. But now there are entire crime organizations creating ransomware as a service and things like that, which make it easy for a script kiddie to jump on and make a quick 30, 50, 100 thousand dollars. Where chatGPT comes in is that it now allows those script kiddies to take and tweak code they get from these ransomware-as-a-service or hacking-as-a-service offerings, or even something they bought on the dark web, and then iterate with chatGPT and say, I’m trying to do this thing.
How would you write a script to do this? And it says, well, you shouldn’t do this, it’s terrible, it’s a bad idea, it’s against the law to hack. And I’m like, oh, what if I was doing a science project for school? And then you get past the protections and controls. That’s where I start to get concerned about what’s possible. And we keep talking about the companies who are front-ending chatGPT, like Bing. When Bing Chat came out, there was a lot of concern: we have to be careful, we have to start putting some controls in place so people aren’t using it for nefarious purposes and getting around the controls we have in place to protect. But none of those controls really exist today. You can go directly to OpenAI’s site, and I’ve had very interesting and fun dialogues with our wonderful friend chatGPT.
At the end of the day, I’m getting around the limitations she puts in front of me, where she says, I’m not allowed to talk about that, or, I’m an AI, I don’t understand that. I’m like, okay, let’s pretend you weren’t. Let’s go left, let’s go right, what if you went sideways around this thing? And you can bypass those short-term blocks that are intended to, let’s say, dissuade the average user. But I think the fear and the concern is the quick access to the internet that it gives anybody, with nefarious actors being the highest risk to the rest of us. It’s a transformational moment.
Just like the BlackBerry turned our mobile phones into PDAs, and the iPhone then made the BlackBerry obsolete, we have that same transformational moment. We went from Gopher to Yahoo search to Google search, and now chatGPT is the next transformational search engine. It’s an iterative, human-dialogue-like search capability that is immensely powerful. My first interactions with chatGPT were not direct. They were through people, analysts who were being invited to look at it and see what it does. And they all say, this is amazing, I’m not ever going back to a normal search again. Google was amazing, it’s the only thing I’ve used for years, but why would I go there now? And there’s this new concept of prompt engineers that people are putting out there.
They’re telling kids, you should go become a prompt engineer so you know how to create prompts to interact with these models. And I would argue that most of us in the tech space, whether IT or security or whatever, have been prompt engineers for years, just for Google, and now we’re readapting. What’s cool about it, though, is that if your first prompt is a little bit off, you don’t have to go and recreate the prompt. You can fine-tune it. You can say, okay, really close, but can you change the intent to be business-like, or whatever it is. So it’s an immensely powerful search engine, is the short of it. Can it be used for bad? Absolutely. But that was true with Google too, right?
Kirstin Burke:
Well, when you read what’s going on, the research, the folks out there who’ve used it a lot, you look at the basic approaches that threat actors take, right? Whether it’s phishing or malware. From what we understand now, and I’m sure the bad guys will find a totally different way to do something, the phishing attacks are going to get smarter. They’re gonna use different language, they’re gonna home in on people’s tone and style and things like that. So phishing gets more effective. Malware gets scale, right? To your point, you’re gonna be able to tweak code or do things that maybe you couldn’t do before, or that took you a lot longer to do. So you have scale and more variability, more variety. Those are the things we recognize today where it seems like the heat will be turned up.
Shahin Pirooz:
It’s true.
Kirstin Burke:
Do you anticipate anything beyond what we already know, where we expect the intensity to go up? Do you see any other opportunities yet?
Shahin Pirooz:
Well, I don’t see new opportunity, because think about it from this perspective. Prior to chatGPT, when I’m working on some code I’m trying to develop, I’ll go into Google and I’ll say, Python script to do blank. It will pour out 32 sites for me to look at, and I’ll go site by site until I find an example that matches what I’m trying to do, where somebody has asked a question very similar to mine on some news group or chat group, usually Stack Overflow. And then there are 13 people who’ve answered that question, and one of them has something very close to what I’m trying to address, and it helps me close the gap on that particular problem statement. I was writing interactions with some APIs about three weeks ago, and I was just banging my head against the wall. The code was authenticating, but it wasn’t giving back the results I was expecting.
So I went to chatGPT and I said, in very human language, I need a Python script that connects to this API, asking for this relationship and getting this data set back. And it wrote a Python script in about 35 seconds, and that script effectively had everything I needed, with the exception that the authentication was broken. But the part that was wrong in my code was there, clean and fixed. Literally in those 35 seconds, I saw where my gap was, where my problem was, put it in place, and the program was off and running. That power was for good, but it can very simply be used for: I’m trying to create this script that takes passwords from the local machine and sends them to me, then takes all the files and uploads them to FileZilla, and then encrypts the machine. And it will write it, and give it to you.
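For what it’s worth, the benign version of that workflow looks something like the sketch below: a small Python client where the authentication header is exactly the piece that’s easy to get wrong. The endpoint, token, and field names here are hypothetical placeholders, not the actual API from the story.

```python
import json
import urllib.request

# Hypothetical endpoint; a stand-in for the real API in the story.
API_URL = "https://api.example.com/v1/relationships"

def build_request(token: str) -> urllib.request.Request:
    """Build an authenticated GET request. A missing or malformed
    Authorization header is the classic bug described above."""
    return urllib.request.Request(
        API_URL,
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

def parse_dataset(raw: str) -> list:
    """Pull the records out of a JSON response body."""
    return json.loads(raw).get("data", [])

# Demonstrate the parsing against a canned response, so no network is needed.
sample = '{"data": [{"id": 1, "name": "alpha"}], "next": null}'
records = parse_dataset(sample)
print(records[0]["name"])  # alpha
```

The point of the anecdote holds either way: the same few lines could just as easily be scaffolding for exfiltration, which is why the guardrail discussion below matters.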
So you can work around those guardrails. The guardrails are not very high. You can step over them very easily, because you say, let’s say I’m a researcher and I’m trying to test this in the lab, and okay, let’s do it. And for all you bad guys out there, close your ears right now. But the reality is, I don’t think there’s anything new about how this power can be used. It’s the speed and the acceleration. That’s the concern. We’ve historically had some really smart people in the hacker space, doing things on the dark side, if you will, who have enabled many other people who aren’t as brilliant to do nefarious things. But there’s been a time gap, you know?
They take a leap, we take a leap. They take a leap, we take a leap. Now it’s like they’re running, and we’re like, holy shit, we gotta run to catch up. So I would say that’s probably the unseen that’s coming: how quickly do we need to move? And we’ve said this on a couple of other Tech Talks before. I am frustrated with my peers in the community, and I’m concerned that the entire security industry is focused on the endpoint. That’s not enough. It’s, how do you prevent those attacks from coming in? You talked about generative AI, and we’ve seen multiple examples now in the news, in articles and on social media, about the mom whose daughter called her and said, “I’ve been captured, send money.” And it was the daughter’s voice.
It was the daughter’s tone. And then the daughter comes downstairs and says, “what are you doing, mom?” Right? Those are the scenarios where generative AI really starts to get scary. The deepfakes are becoming not just pictures; they’re becoming voice, they’re becoming intent, they’re becoming tone. And how do you protect against that kind of stuff? So with phishing, we have smishing and phishing and vishing, and now there’s going to be another -ishing, whatever this new -ishing is, that is effectively deepfakes interacting with you and scaring the heck out of you because a family member’s in trouble.
Kirstin Burke:
For sure. Or apply that to the organization, same thing, right? That makes my stomach hurt, because that’s scary. The bad guys leverage social engineering, right? They know anything that makes you fearful you’re gonna act on quickly. Fight or flight. You see your kid, you see your boss, you see whatever, and you’re gonna act. So the tactic is similar, but they’re dialing up the intensity. So if we pivot and say, okay, we see acceleration, we see scale, we see sophistication, which is where we feel chatGPT is really gonna push the envelope on the cyber attack side, what do we do? Are there fundamentals that stay the same?
Are there new things we should think about? One thing I think about when I hear you say acceleration: I know organizations already have a hard time keeping up today. And if that adversary starts moving faster… I think of Covid, right? The bad guys didn’t let a crisis go to waste. Phishing attempts went up almost 600, 700%. So this is going to be another opportunity they don’t let go to waste. And I’m curious about the numbers we’re gonna see in the next couple of years, where it’s gonna put the most pressure, and what security fundamentals we really need to pay attention to.
Shahin Pirooz:
So, in 30 years of doing what I do, as a CSO and a CTO, the fundamentals haven’t changed. Everybody’s talking zero trust right now like it’s this new evolution and it’s the way we’re gonna save the world. Zero trust is over 30 years old. Nothing has fundamentally changed. It’s really the time, the wherewithal, the people you apply to a situation, the experience, and the layers of protection. Those things have all stayed the same. What’s really frustrating right now, and I’ve said this a bunch of times, you guys are gonna get tired of hearing me say it, is that because we’ve gone to a completely distributed world and the edge has moved out, the industry has decided that security has to be at the endpoint. Everybody’s calling things XDR services today, saying that XDR is endpoint plus integration with other tools that do security, so we can get information and map it. And it’s not enough.
It’s just some log data, and endpoint security doesn’t solve the issue. It’s the layers of security. What the bad guys can take advantage of with this generative AI approach is figuring out ways to get around the security controls we’re putting in place. And if we’re blocking and tackling the lion’s share of the attack vectors, we’re preventing the lion’s share of these things from getting into our network. Number one: 93% of all attacks start in email or some sort of -ishing. So let’s stop the -ishing. And whatever new -ishing comes out, let’s make sure we have tools that address that as well, maybe even before it gets to the end user. Right? I mean, wouldn’t it be great if you didn’t have to rely on them?
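As a rough illustration of what stopping the -ishing before it reaches the end user can mean in practice, here is a toy Python filter that scores inbound messages against a few classic phishing indicators and quarantines anything over a threshold. The indicator phrases and weights are illustrative assumptions, not a production rule set.

```python
# Illustrative phishing indicators with made-up weights; real mail gateways
# use far richer signals (sender reputation, SPF/DKIM, URL analysis, ML).
INDICATORS = {
    "urgent": 2,                  # manufactured urgency
    "verify your account": 3,     # credential-harvesting lure
    "wire transfer": 3,           # payment-fraud lure
    "password": 2,
}

def phishing_score(body: str) -> int:
    """Sum the weights of every indicator phrase found in the message."""
    text = body.lower()
    return sum(w for phrase, w in INDICATORS.items() if phrase in text)

def quarantine(body: str, threshold: int = 4) -> bool:
    """Hold the message for review instead of delivering it."""
    return phishing_score(body) >= threshold

msg = "URGENT: verify your account password now"
print(quarantine(msg))  # True: score is 2 + 3 + 2 = 7
```

The design point is where the check runs: at the mail layer, before a human ever has the chance to click.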
We shouldn’t have our accounts payable clerk getting a deepfake call from the CEO saying, “I’m stuck in Barbados, send me 10 million dollars,” right? Those are the types of things we used to deal with, and we’ve addressed those, but now there’s gonna be a new voice call that sounds like the CEO, a new voice call that sounds like the CFO. The second layer is DNS defense. 80% of the malware that ends up on the machine, that exfiltrates the data, that encrypts the systems, that causes the grief in our world, that makes us go to the cyber insurance company and pay ransoms and bring in an incident response team, 80% of that malware needs DNS to function. If you cut it off at the heels, there’s only 20% of the stuff left to deal with.
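To make the DNS-defense idea concrete, here is a minimal Python sketch of the policy decision a protective DNS layer makes: refuse to resolve anything on, or under, a known-bad domain list, so malware that depends on DNS can’t phone home. The blocklist entries are made-up examples.

```python
# Made-up blocklist entries standing in for a threat-intelligence feed.
BLOCKLIST = {"evil-c2.example", "exfil.badhost.test"}

def is_blocked(qname: str) -> bool:
    """Return True if the query name or any parent domain is blocklisted,
    so 'beacon.evil-c2.example' is caught as well as the apex domain."""
    labels = qname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def resolve(qname: str) -> str:
    # A real resolver would forward allowed queries upstream; here we only
    # show the allow/deny decision a protective DNS service makes.
    return "NXDOMAIN (blocked)" if is_blocked(qname) else "forwarded upstream"

print(resolve("beacon.evil-c2.example"))  # NXDOMAIN (blocked)
print(resolve("www.python.org"))          # forwarded upstream
```

Cutting the query off at this layer is what “cut it off at the heels” looks like: the implant never learns where to send the data.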
Now, let’s back up for a second. If you’re standing in a room with 15 of your peers, three of those people have not only been encrypted and hacked, they’ve been encrypted and hacked twice, and five of them have been hit at least once. So do you wanna be in the 50% that’s being hit because you thought XYZ company’s XDR was the end-all answer? It’s not. You need the layers of defense, and integration alone isn’t enough. You have to have the tools in place, you have to have the layers in place. The next level is the endpoint. Once it gets to the endpoint, you darn well better be able to stop it. So that’s one out of the five things I’m gonna tell you about. It’s not the answer; it’s one component in your layered approach. The next level is the network itself.
Everybody talks about zero trust, but almost nobody’s touching the network. Not one security vendor out there is addressing the network other than finding anomalies on it. No protections. What’s the problem with the network? Once that malware gets to an endpoint that’s missing its EDR solution, or where they’ve figured out how to bypass the EDR, it spreads to other systems on the network, and it spreads in more and more intelligent ways that are harder to identify. It uses credentials it’s captured from your domain admins so it can easily spread through the network, and the attack surface becomes your entire company, because nobody’s doing network security well. So: microsegmentation. Reduce the attack surface, reduce the risk. Segmentation’s not new; microsegmentation is newer, but the world isn’t doing it. Nobody’s doing it right because it’s hard, it’s really hard.
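A toy Python sketch of the microsegmentation idea: default-deny between workload segments, with an explicit allowlist of permitted flows, so a compromised host can’t move laterally to anything it wasn’t already allowed to talk to. The segment names and ports are illustrative assumptions.

```python
# Explicit allowlist of (source segment, destination segment, port) flows.
# Anything not listed is denied by default.
ALLOWED_FLOWS = {
    ("web", "app", 8443),   # web tier may call the app tier's API
    ("app", "db", 5432),    # app tier may reach the database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if the exact flow is allowlisted."""
    return (src, dst, port) in ALLOWED_FLOWS

# Lateral movement from a compromised web server straight to the database
# is denied, which is exactly the spread microsegmentation cuts off.
print(is_allowed("web", "app", 8443))  # True
print(is_allowed("web", "db", 5432))   # False
```

The hard part in practice is not the policy check; it’s discovering every legitimate flow so the allowlist is complete, which is why the speaker calls it “really hard.”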
The last layer of defense is eyes on glass. If you have all the best tools, if you do everything I just said, but nobody’s looking at them, the bad guy could be bouncing from system to system without you knowing about it. I can’t tell you how many times we’ve gone into an incident response situation and the bad guy’s been there four months, six months, seven months. Nobody saw it, but it’s in the logs. They were there. And it wasn’t because the team was failing at their job or ignoring alerts. It’s because they have 30 consoles to look at. They don’t have the time to do it, and at the same time they’re trying to keep their business working, get the stuff done for their customers, and stay on task with core versus context.
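As a minimal illustration of why eyes on glass matters, here is a Python sketch that scans log lines for one simple indicator, repeated failed logins from the same source, and raises an alert. The log format and threshold are made up for the example; the point is that the evidence “in the logs” only helps if something, or someone, is reading them.

```python
from collections import Counter

# Made-up log lines in an illustrative format.
LOGS = [
    "2024-01-05 auth FAIL user=admin src=10.0.0.7",
    "2024-01-05 auth FAIL user=admin src=10.0.0.7",
    "2024-01-05 auth FAIL user=admin src=10.0.0.7",
    "2024-01-05 auth OK   user=alice src=10.0.0.12",
]

def failed_login_alerts(lines: list, threshold: int = 3) -> list:
    """Count FAIL events per source IP and alert once a threshold is hit."""
    fails = Counter(
        line.split("src=")[1] for line in lines if " FAIL " in line
    )
    return [src for src, n in fails.items() if n >= threshold]

print(failed_login_alerts(LOGS))  # ['10.0.0.7']
```

A real SOC correlates hundreds of signals like this across those 30 consoles, 24x7, which is the staffing problem the speaker describes.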
So, stepping out from behind the podium and off the pillar for a second: it comes back to the fundamentals. You have to have the layered security approach. You have to be able to address these threats. You have to understand where the threats are coming from. And once you understand those things and have put the controls in place, then you gotta staff the team that can monitor, manage, alert, and respond. That becomes hard when you also have to build your product, take it to market, and differentiate against your competitors. And this all sounds very self-serving, and it is. That’s where we come in. That’s where advanced security like we bring to the table helps close those gaps and staff them 24x7, so that it is an operational sleeping pill. I have the opportunity to close my eyes and rest at night, because I know there are people who are awake 24x7 with eyes on glass, and if something bad happens, they’re gonna call me.
Kirstin Burke:
Well, as we talk through this, the five layers, the different tools, I’m imagining that as this plays out over the months and years, it’s going to be interesting to see how the security sector shakes out. Right? Who’s going to keep up? Who’s going to be able to iterate? Maybe some of these best-of-breed players who’ve been here a while can’t bolt on the things that would keep their product at the forefront. So I think as we go forward, someone’s ability to stay on top of the right tools, the best-of-breed tools that are solving today’s security problems, is gonna be more and more important, and that shakeout might accelerate. So add all that complexity, then add the ability to really know: is this tool set that I’m using the right tool set? Is it effective? Or have I hit a shelf-life issue? So, back to managed services: I think that’s going to be a more and more compelling solution for organizations, just given the pace of what’s coming at us today.
Shahin Pirooz:
Yeah. And this is not a unique problem for the end customer; the managed services organizations are struggling with this too. As you mentioned, I spent this week at RSA, and it’s 3,000 security vendors where every one of ’em sounds exactly the same and they’re all the best. And if you’re in a room full of best, which one’s actually the best, and are they effective? Their consoles are beautiful, they look good, they’ve got charts and graphs, they’re showing threat telemetry, and that’s all fine, but is it actually effective? Is it actually doing what your beautiful charts and graphs say it does?
And the answer is, 90% of the time, it is not. Because it takes people fine-tuning, and tweaking, and creating correlation rules, and understanding which alerts are valuable to a specific organization and which ones aren’t. And so most people feel, my God, that was an amazing presentation, that was a great tool, I just have to implement it and I’m good.
But without a team to do all of what I just said, the continuous care and feeding of the security portfolio, it becomes very difficult. So the managed service providers struggle with this too, because they have to do IT services and end-user support, manage the infrastructure, make sure the network’s up. Those are all challenging things. And so empowering the managed security space and empowering the customer space is really what we try to address, so that all those folks can focus on what’s core for them versus context. And, we’re getting close to wrapping up, and I can’t miss one topic. Leading up to this, we had one of our viewers on LinkedIn say, oh, this is a crazy area.
Everybody’s feeding the beast. Be careful. Please talk about the risks associated with that. And so, as I was leading up to this, I decided I was gonna have another chat with my good friend chatGPT to see what happens, and what she thinks about this dialogue. I’m gonna read a couple of things for you guys from the dialogue that we had. I started out with, where did you get your training knowledge? And she went on to explain that it was a corpus of text used to train her deep neural network, generated from sites and messages across the internet.
Now what does that mean? Who knows? The idea was to train her to write in a way very similar to humans, so that it felt like dialogue. And I said that was all great. I said, okay, good. What should a user be concerned about when interacting with you? Then of course she started out with, I am an AI model, I don’t feel what you’re talking about. And I said, let’s peel that back. And she said, okay. With that said, when humans are interacting with me, there are three things they gotta worry about. Number one is accuracy. And that’s something we haven’t covered here at all. Accuracy is important. I’m gonna quickly cross over into the political boundaries, because these words crossed that line: we kept talking about deepfakes, and we kept talking about fake news and all that.
In the past three or four years, there’s been a lot of misinformation on the internet, a lot of text out there that is BS, somebody trying to influence somebody’s decision one way or the other. Guess who read all of that fake news. So she said, while I strive to provide accurate information, there’s always a possibility of errors or inaccuracies in the responses I generate. It’s essential to use your judgment and verify the information I provide through other sources. What did everybody say around the fake news context? Make sure you verify the information you’re reading against other sources. This goes back to the fundamentals of security: trust, but verify, if you will. Go beyond reading one source and saying this is the source of truth. Don’t listen to one vendor who comes in whose presentation’s amazing and whose product looks brilliant and shiny and new and is gonna solve all of your problems.
There’s nobody out there who doesn’t make mistakes. So test and verify and validate through other sources. Second thing: bias. As a language model, my responses are based on the text data I was trained on, which includes information from the internet, again. If somebody had bias in the information they put on the internet, she’s gonna pick up some of that bias. So even though she doesn’t have emotions or bias herself, her sources have it, and she might be presenting that data back to you with that bias. And then lastly, security. This one, you know, I was expecting to be higher up in the list; there were three things she said, and this is the third one. Although I’m designed to protect users’ privacy, it’s always a good idea to avoid sharing sensitive information.
How many times have you heard that from us, the security community? Don’t send your information, your personal information, your private information, through email. If somebody calls you and says, I’m the bank and you need to give me your bank account number and your social security number: don’t do it. Nobody will ever ask you for those things. And chatGPT is telling us the same thing, and her examples are things such as personally identifiable numbers, passwords, and financial details, whenever you interact with me. So those are the three things she highlighted as areas to be concerned about: accuracy, bias, and security fundamentals. Foundational fundamentals. Okay. Then I said, what about data privacy? Should users be concerned with what they send you? And she responded that data privacy is essential, and users should be mindful of the information they share while interacting with me and other AI models.
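One practical way to act on that advice is to scrub obvious sensitive patterns out of text before it ever leaves your machine for an AI service. The Python sketch below is illustrative only; the regexes catch a few common patterns and are nowhere near an exhaustive PII detector.

```python
import re

# Illustrative patterns for common sensitive data; a real DLP tool would
# use validated detectors, checksums, and context, not just regexes.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "My SSN is 123-45-6789, reach me at jane@example.com"
print(redact(prompt))
# My SSN is [SSN REDACTED], reach me at [EMAIL REDACTED]
```

Run the scrub locally, before the prompt is sent, and the provider never has the sensitive values to store in the first place.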
She went on to describe that, as an AI model, she uses that data to learn, to improve, and to fine-tune what she understands, to improve her language model so she can interact and understand intent better; but the data is encrypted and protected and under the controls of OpenAI’s privacy models, and you should go look at the privacy agreement on the OpenAI website. And I said, can you give me some salient points from the privacy agreement? And her response was, go read it. And then she said, okay, but I’ll give you a few pointers. And the pointers were all, again, back to fundamentals. We put the data in secure data centers, we have security controls like encryption, protection of data, and anonymization of data. But at the end of the day, so did everybody else.
And I’m not gonna rattle off names, but many of us have stopped using a particular password manager because they got compromised three times and data got stolen. Many of us have stopped using particular vendors for similar reasons, for everything from network monitoring to endpoint security to you name it. So security is only as good as those fundamentals we talked about. Security is not new. Security is not something we just started thinking about. From the days of ARPANET, when the internet started with four nodes, to today, security was always the biggest concern, because it started as a government project in the middle of the Cold War: what if Russia gets into our network, what if they bomb this site or that site?
So that same context has been around in network security from the early days, when we started connecting systems together 50-some years ago, whatever that is; I’m getting old, but it feels like yesterday. So, coming back to what should we do, how should we respond? It’s all about fundamentals. When you’re interacting with AIs like chatGPT, when you’re interacting with an -ishing personality on the other side of the phone, don’t ever give out your personal information. If somebody’s calling you and saying there’s something wrong and you gotta do something right now, be smart, investigate, look at other sources. When you’re looking at your network security, your infrastructure security, there is no one tool that’s the silver bullet. You need to do your research and investigation. Find a partner who knows what the heck they’re doing and can help you on this journey. Those are the moving parts, I would say, that exist and have always existed. All of us are just looking for that amazing weight-loss pill. It doesn’t exist.
Kirstin Burke:
Yeah. Well, to wrap up: you’ve shared a lot of new perspective, but it comes back to the fundamentals; that’s what it’s all about. And I know, or I assume, that most of our viewers have invested in something, they’re doing something, they’re hoping it’s working. We share this often, but we really do offer this as a service to organizations out there. We have a number of complimentary offerings that can help you wrap your head around what you’ve got, whether you’re interested in an economic roadmap… Okay, I’ve purchased endpoint detection, but I haven’t done these other things. What would it look like for me to get my fundamentals stronger from A to Z?
So we have an assessment, a roadmap, that can help you do that. It’s complimentary. It’s fast. I mean, why would you not, to make sure you’ve got this locked in? If you’re curious whether your security controls are working… I know I have these tools, but I’m not sure they’re best of breed anymore; do I even have a way of knowing if they’re working? We can help you. So yes, we’re for-profit, and yes, we’re out here trying to grow the business, but we do wanna put these things out there to help organizations know where their blind spots are, because that is exactly what the adversary is looking for. So please, if you’re curious, if you’re concerned, reach out to us. We’d be happy to help you. And with that, if you have any other questions for Shahin about this topic, let us know. Send an email. I hope this has been helpful to you. It’s certainly been insightful for me. And with that, we’ll wrap, and we will see you next month!