Kirstin Burke:
Welcome to DataEndure’s first TECH talk of 2024.
Shahin Pirooz:
My God, it is the first one.
Kirstin Burke:
It is the first one.
Shahin Pirooz:
Wow.
Kirstin Burke:
’24 marks a very special time for DataEndure, it is our 40th anniversary. Before we get going, happy anniversary.
Shahin Pirooz:
Happy anniversary.
Kirstin Burke:
Happy anniversary.
Shahin Pirooz:
And thank you all for the support.
Kirstin Burke:
Yes, absolutely. Think about the change, think about how the pace has changed, how technology has changed. Over 40 years, we’ve been going back and just looking at things from 1984, and wow, quite a difference.
Shahin Pirooz:
It’s a very different place. We were talking about parachute pants.
Kirstin Burke:
I was going to bring it up, I didn’t know if you were. Anyway, we just want to thank you for joining us. This year’s going to be a fun year for us as we go through different ways to celebrate 40 years. But what we’re going to spend some time on today are just trends. New year, people go through new goal setting, evaluating what they’re doing, what they’re not. And it’s just a good time to reflect. And so I spent some time just looking at what the market and what the industry said were some top trends in our market. And I’m just going to rapid fire to Shahin, and we’ll get some input from you in terms of up, down, issue, not issue. How do you weigh in on these things having been in this industry for 40 years?
Shahin Pirooz:
It’s getting there.
Kirstin Burke:
Close to. For a while.
Shahin Pirooz:
Although, in ’84 I was a junior in high school, so I had started working, but in the restaurant industry, not this space.
Kirstin Burke:
Anyway, I think the first trend that we bring up, obviously you see it everywhere, you hear it everywhere, is AI. It really took everybody by storm last year, and I think there’s an expectation now that… I don’t even know if the dust is settling yet, but there’s going to be a maturity. What is the impact going to be? How are businesses going to adapt? When you think about AI and what people should know or think about in 2024, what’s your perspective?
Shahin Pirooz:
Yeah. Before bringing ’24 into play, we’ve talked about this when we talked about the evolution of endpoint security from antivirus to EDR to MDR, to XDR, to blah, blah, blah DR. Whatever the end state we get to. Same thing happened back in the day with cloud. We came up with this concept of cloud. Then we said, you know what, even the on-prem stuff is kind of a cloud, so let’s call that private cloud. And then we got public cloud, and then we got distributed clouds. And so, I’d like to now bring our thinking back to this: AI has become another marketing term. It is not a technology, it is not a thing. It is not something you could put your fingers on. There’s a handful of underlying functionality and technology and capabilities that make the concept of what was supposed to be artificial or augmented intelligence a reality.
I would say up until this past year, when ChatGPT shook up the world, we didn’t really have a notion of AI that we average humans got to see. In labs, sure, but publicly available and accessible. But every single security provider, a ton of different data providers all had AI built into their stacks. And they have rushed in ’23 to jump on the AI bandwagon to say, “We’ve been AI since 1999.” I’m exaggerating, but you get the idea. The issue is that there are really two completely different categories of what is AI. And I’ve forever hated the concept of artificial intelligence, and really focused on the idea that it’s augmented intelligence, it’s helping. Because all of AI, prior to ChatGPT, was really machine learning and deep learning. It was culling through the data and finding trends and patterns, and so on and so forth, that then a machine could make decisions on.
Fast-forward to 2023 and the exposure that ChatGPT gave to the whole concept of generative AI. A lot of what was learned in trying to develop AI to make decisions from a tech and data perspective became a pattern that allowed us to take natural language processing, which is understanding natural human language, and layer machine models on top of it, models based on the learning and behavior of a person, so that we can create this interactive, human-like chat functionality. As opposed to the traditional chatbots, which were basically if-then-else logic. We’ve moved away from the if-then-else logic of what we used to call AI, to something that is interactive and more natural language. And I think it’s important to understand that while that has made a huge impact on business, and there are some huge advancements that are potentially going to impact white-collar worker type roles, it is not going to change the way we do business fundamentally.
Generative AI isn’t going to be your security analyst. The old traditional AI that finds anomalies and detects patterns and mistakes, that still exists, it has been there, and it will continue to do that work. This whole push that AI is going to change the world, it’s only going to change the world in terms of consumer interaction, in my opinion, it’s not going to change the world in terms of underlying tech changes. It will help us, because these machine models can now be built around more technology-related roles, if you will, as opposed to just the white collar.
I’ll give you an example: one of the things that we’re working on in the labs is, how do we create a security analyst generative AI? But it’s not intended to replace a security analyst, because we can never rely on that generative AI to be able to ask the question to say, “This looks funny, I’m going to dig a little deeper.”
Kirstin Burke:
The discernment.
Shahin Pirooz:
Yeah, the discernment is missing. But understanding intent and being able to answer questions based on a body of knowledge, that’s actually really valuable. That takes the time our analysts have to spend answering questions for clients and partners, gives that back to them to do the discernment, and allows the generative AI to do the answering. I think there’s some huge impact on business from that perspective, but to think it’s going to change the way we do business, I’m not there.
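To picture the kind of frontline assistant Shahin describes, here is a minimal sketch: a generative helper that only answers from an approved body of knowledge and escalates anything it cannot ground to a human analyst. The function names, knowledge-base entries, and keyword-matching retrieval are hypothetical stand-ins, not a description of DataEndure’s actual implementation.

```python
# Sketch: a frontline assistant that answers routine questions from an
# approved knowledge base and escalates everything else to a human.
# All names and content here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    escalate: bool  # True when a human analyst should take over

KNOWLEDGE_BASE = {
    "mfa": "MFA is required for all remote access; see the access runbook.",
    "phishing": "Forward suspected phishing to the security mailbox; do not click links.",
    "patching": "Critical patches are applied within 72 hours of vendor release.",
}

def retrieve(question: str) -> str | None:
    """Naive keyword lookup standing in for real retrieval (e.g. vector search)."""
    q = question.lower()
    for topic, entry in KNOWLEDGE_BASE.items():
        if topic in q:
            return entry
    return None

def frontline_assistant(question: str) -> Answer:
    """Answer only what the knowledge base supports; escalate the rest."""
    grounded = retrieve(question)
    if grounded is None:
        # No grounding found: this is exactly where human discernment
        # ("this looks funny, dig deeper") still has to take over.
        return Answer("Escalating to an on-call analyst.", escalate=True)
    return Answer(grounded, escalate=False)

if __name__ == "__main__":
    print(frontline_assistant("What is our patching SLA?"))
    print(frontline_assistant("Why did this login come from two countries?"))
```

The design point is the escalation path: the assistant gives analysts their time back on routine questions, while anything outside its body of knowledge stays with a person.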
Kirstin Burke:
Yeah. Well, and I think your example is great. There are a lot of things that we do in any one of our jobs. It’s like, oh gosh, if we could speed this up, or if we could get someone else to do this, or if we could have someone dig through this for us, or whatever it is. Where are those areas where that acceleration can help the more trained individual do their job better?
Shahin Pirooz:
Exactly.
Kirstin Burke:
We’ve seen it in sales, we’ve seen it in marketing, we’ve seen it in tech. And so, that does make sense that if you put that frame of reference around security, it can help us do our job better, it can help us find things faster, it can help us isolate things more precisely if we harness it in the right way.
Shahin Pirooz:
And there is certainly a security implication of generative AI that comes to the table, but it’s no different than any other type of machine learning or data processing, or anything in the data science space. There’s always been the notion of poisoning the well. And now, because these models are going out and scouring and grabbing information from the public internet, it’s a lot easier to build a site that can easily poison the well and look like real data. That’s what we’re starting to see, which is malicious links and things like that getting fed into the platform. Which is why almost every generative AI is saying we’re not going to give you links in our responses, because it’s very easy to do that poisoning of the well. But I would argue that nothing’s changed. We always had the poisoning-the-well problem. We always had, go check your sources, go and inspect what you expect. Those things are not new.
From the moment two people communicated to each other on a computer in different locations, and I’m going to go back to the days of Cisco’s foundation, we had the problem that something in the middle can change that communication, act like a man in the middle and do something. From that moment on, human communication was no longer this trusted interaction. It was, I’m reading something and thinking it’s coming from someone. The implication that AI brings to the table is that now we have deepfakes that are able to mimic voice, mimic characterizations, mimic the look and feel of a person. And so you can get a voice call from somebody and it sounds like it’s your boss, it sounds like it’s the CEO, and they’re asking you to transfer $100,000 into their account.
Kirstin Burke:
Trust, but verify.
Shahin Pirooz:
Security awareness is important. But ultimately, it’s nothing new, it’s just more complicated.
Kirstin Burke:
It makes it all that more important that these different layers can do the job they’re meant to do, fill gaps, support the other layers. It just puts more of a pressure test on the security layers that you ought to have in place anyway.
Shahin Pirooz:
Yep.
Kirstin Burke:
All right. Well, speaking of layers and teams, another trend that I believe will be controversial for us, a lot of folks out there talking about the convergence of IT and security. Given the level of threats we have, given how the threat landscape is changing, it’s not just isolated to a firewall or just certain areas.
You’ve got to care about how your storage is configured, you need to understand how your clouds are configured, and some of those fall into the realm of IT. And so, how do you feel about this 2024 trend saying, well gosh, maybe we ought to start converging IT and security teams?
Shahin Pirooz:
Quick public service announcement, I am extremely biased on this topic. I’ve been doing what we do for 25, almost 30 years, and that is outsourcing IT technology and security technology. And most recently, over the last decade, focused strictly on security. I’m biased in that context. But here’s, I would say, my perspective on the whole thing. If you remember back to, again, I keep tapping into cloud, going back to cloud. We all of a sudden decided that we need a virtualization engineer, we no longer need a server engineer and the network engineer. Because we have now virtualized the infrastructure, and so we need somebody who can work on top of that virtual stack. And it’s hard to train the network guys to do server stuff, and it’s hard to train the server guys to do the network stuff, so we need a new category of engineer, the convergence of those two functions.
And then we had the realization, well security is still hard, maybe we should take the firewall out of that. Everything else can merge. So VLANs, segmentation, VPCs, all the things that make up the core. Think of it as the internal network and the servers that run on it, whether it’s in cloud or on-prem, that stuff should be a virtualization engineer, or a cloud engineer, or whatever term you want to give it. We’re now faced with the same thing that the market is telling us we have to do. At the end of that whole convergence concept we realize, you know what, a virtualization engineer can create policies to segment servers from each other, but they don’t really understand why. They don’t understand the benefits of it. We really do need somebody who understands the network. And the network, the folks who transition from network to virtualization, they don’t understand why specific amounts of compute or memory or whatever are important to a specific type of application, or how to fine tune an application, how to make a database run faster. So maybe we were hasty in our judgment to rush to this thing.
Of course, back then I was very biased to say, “You don’t need any engineers, you can outsource it all to me.” So that’s what I’m saying, the public service announcement was heavily biased there. Even in that context back then we learned very quickly that you can’t do that. I started my tech career at EDS, which was heavily outsourced. Which is, your employees are now our employees, you have no IT team left. That shifted to a more managed services model, where we started the first managed services in the country. And what we came to realize was, in fact, as an outsource provider we can’t possibly know how to leverage technology as a key differentiator for that company against their competitors. All we can do as an outsource provider is level the playing field and make IT a commodity so that they don’t have to worry that the Joneses are doing better than them, or that they’re not doing something they ought to be doing.
But then on top of that, how do you take advantage of data science or technology in the field, or any of the other number of things that can uniquely differentiate you from your competition? You had to have engineers on staff, whether it was at the architecture level or the engineer level. That was the point where MSPs really started to become a critical part of the stack. Because now the commodity IT, the help desk, the user support, the patching, the basic stuff which nobody wanted to do, can be handled, but the real focus on the business ended up being internal resources. So fast-forward to today, it’s the same thing. There are plenty of IT people who have good security experience. Because historically many companies, especially smaller ones, didn’t have the budget to separate those, so they’ve been converged for decades.
But the same thing kind of applies, fast-forward to today: the market is also at the same time saying you can’t possibly keep up with the 3,500 security vendors that are out there, do the shootouts, do the evaluations, pick the right tool. And it takes you a year to implement the technology, and by the time you do, that tool’s obsolete, you’re effectively no longer effective, and you’ve got to start over. I’ve always likened it to painting the Golden Gate Bridge. You get to one end, you’re starting again. And so the market has been saying, look for MSSPs, outsource your security and focus on what differentiates you as a company. Make security a commodity. And you’ve also heard me say that the acronym MSSP has really become muddied with the confluence of people who just added a security tool to their stack and call themselves MSSPs, but they’re not true MSSPs. So buyer beware in that category.
But should the convergence happen? I think if you do follow that outsourcing mantra, with the concept and the benefits of continuously improving platforms and technologies, then yes, you don’t need a dedicated security team. You don’t need people who are 100% security focused. And the concept that smaller companies are taking advantage of is, my IT people have an understanding of security and know how to talk about it and know how to interact with it, but they’ve got a 24/7 security operations partner that is telling them, “Here’s the area to put energy and focus into. Here’s the risks, here’s the vulnerabilities, here’s the things we’ve got to address.” In that context, yes, a convergence makes sense. But in the context of if you’re doing it all yourself, absolutely not. There has to be a separation of responsibility, because you can’t have the wolf watching the hen house.
Kirstin Burke:
Well, yeah, and I think that tension or that pressure where one… Liken it to sales and marketing, or think about an engineering team, there’s a healthy tension there that makes sure each one is checking the other, if you will. And from some of the customer conversations that I’ve heard, when you try to do it all, IT often has more of that level of urgency. Something’s broken, something needs to be fixed, we need to get on this. Or something needs to be built and developed by this timeline. And security can sometimes be relegated to, okay, when we get around to it. Well, we implemented this, we check the box, we’ll go back and inspect it later, we’ll go back and we’ll look at the alerts later.
And there’s almost an impression that I can set it, forget it, and get back to it later. And I think in the world we live in, we can’t have that. We have to have someone having the same diligence on security that we do on, hey, we got to make sure the help desk is available 24/7 for the executives. Or, hey, we have to make sure… Well security, same thing. We have to make sure that someone’s on it 24/7. We have to make sure someone’s watching. And I think it’s very hard to do that when you’re trying to converge-
Shahin Pirooz:
When you’re also supporting the CEO, or whatever the case may be.
Kirstin Burke:
Yeah.
Shahin Pirooz:
Part of the challenge with this whole convergence concept is that the truest and oldest security concept is separation of duties. And every regulatory concern… If you’re regulated, that whole thing is a horrible idea. Because you have to have separation of duties so that the people who can make the changes have a set of checks and controls that say those changes are monitored, controlled, approved. And that is going to slow IT down to a degree that they won’t be able to operate. I’m saying this from a security practitioner perspective. In my history, when I was much younger and had far less gray hair, I used to enforce security that hindered the progress of the company. To be secure. Because, no, some bad guy’s going to be able to get in, we can’t allow that. But in reality, we have to strike this balance where security is much more of a consultative role in the organization.
And part of the challenge we see is, back in the 90s the CIO started to get visibility as a board level position. And the CISO started as a new function underneath the CIO. Today, with security being such a top of mind board thing, the CISO is not just at an executive level, the CISO is a board level seat that sits in the board and presents all security posture and status to the board. That context means that if the CISO is reporting to the CIO, you don’t have the separation of duty, but the board recognizes that it needs to be at that top level. And so I question whether this is the practitioners saying we need to merge the function so that this separation, where the board is giving direction to the CISO and the board is giving direction to the CIO and there’s a battle between them, needs to go away.
Maybe that focus should be, how do we get rid of this notion of separate chief information and chief information security officers, and merge the executive level? Put the responsibility there, and let the executive level be a balanced, converged position. But the teams themselves, you can’t have an IT person be a SOC engineer. You can’t have a SOC engineer be an IT person. That concept is one of those unintended consequences, the trailing implications here.
Kirstin Burke:
Cool. All right, well moving along. Next item that seems to be top of mind with everybody, and interestingly enough we’ve been talking about this for a very long time, is zero trust. COVID split everybody to everywhere with devices everywhere, personal devices, corporate devices, networks, Starbucks, wherever. All of a sudden zero trust as a concept has really jumped to the top of the list, which is understandable, relevant, correct.
But without rehashing everything that we have said about zero trust, and respecting people’s time, what are a few things you would say about zero trust in 2024?
Shahin Pirooz:
Yeah. Fundamentally, at the core, without rehashing, zero trust has existed for 30 years. It’s not a concept that’s new, and it literally means moving from an implicit to an explicit trust model. So create explicit policy rules rather than implicitly assuming something is in place. What does that mean as we go forward? We spent a lot of time last year and in ’22 talking about how VPNs are fundamentally broken, and ZTNA, zero trust network access, became a big thing. But all that had happened is most of the manufacturers and technology providers took the VPN concentrator, stuck it in AWS and said, we’re ZTNA. Nothing has changed from a functionality perspective, it’s still the same insecure VPN concentrator; it’s just not in your data center, it’s someplace else. I think when we talk about zero trust, we really have to get back to that implicit versus explicit context, and try to understand what it is.
If we truly want to implement zero trust and not just jump on the marketing bandwagon, if really we’re trying to secure the environment and assume from the beginning that I don’t trust this device, this individual, until I’ve validated they are who they say they are, then there’s a lot of moving parts that go into that. There’s identity that goes into it, there’s actual device inspection. We used to call it network access control, but it’s no longer network access control, it’s device access control. Because the devices could be anywhere, they’re not on your network. Then there’s DNS protection, monitoring SaaS applications; zero trust takes a much bigger view. And SSE is probably a great place to think about where zero trust and cloud security come together. SSE is secure service edge, and it’s the subset of SASE, secure access service edge, that simplifies implementing SASE.
So take SD-WAN out, which is really difficult and complicated. People said, “I can’t do SD-WAN, it takes me three years to implement that.” We do it in 90 days. But take and implement the core components, zero trust network access, endpoint security, user identity, those types of things, and create a concept that is really more about the fundamentals. How can small, mid, and the larger half of the enterprise space, but not the very large enterprises, take advantage of zero trust functionality and be effective? Look at secure service edge as a model, but don’t just jump on the marketing bandwagon, dig in. Did they simply move the VPN concentrator to the cloud?
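As a rough illustration of the implicit-versus-explicit distinction, here is a minimal sketch of a default-deny access decision: nothing is trusted until identity, device posture, and the requested application all match an explicit policy. The policy fields and posture checks below are hypothetical, not the schema of any particular ZTNA or SSE product.

```python
# Sketch of an explicit-trust ("default deny") access decision.
# Policy fields and posture checks are illustrative, not a vendor schema.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    group: str            # e.g. a group claim from the identity provider
    device_managed: bool  # device posture: is this a managed endpoint?
    disk_encrypted: bool
    app: str

# Explicit allow rules; anything not matched is denied by default.
POLICIES = [
    {"group": "finance", "app": "erp", "require_managed": True},
    {"group": "engineering", "app": "git", "require_managed": True},
]

def authorize(req: AccessRequest) -> bool:
    """Grant access only when an explicit policy matches and the device
    passes posture checks; everything else is denied."""
    for policy in POLICIES:
        if req.group == policy["group"] and req.app == policy["app"]:
            if policy["require_managed"] and not req.device_managed:
                return False
            return req.disk_encrypted  # posture evaluated on every request
    return False  # no matching policy: implicit trust is never granted

if __name__ == "__main__":
    ok = authorize(AccessRequest("kb", "finance", True, True, "erp"))
    blocked = authorize(AccessRequest("kb", "finance", True, True, "payroll"))
    print(ok, blocked)  # True False
```

The point of the sketch is the final `return False`: in an explicit trust model, the absence of a matching policy means no access, rather than access by default.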
Kirstin Burke:
Yeah, make sure you’re not picking something where the problem’s been moved and you’re getting a solution that’s…
Shahin Pirooz:
Or it’s rebranded, because again, they wanted to jump on whatever this new bandwagon was. Is zero trust important? Absolutely. Is it a new thing? No, it’s always been important. But we just haven’t implemented it and it wasn’t as big a deal when we didn’t have distributed assets.
Kirstin Burke:
Got it, got it. Two left. And on one of them, I believe we’re going to hear the same thing you said earlier: nothing new under the sun. But social engineering is still top of mind, 80% of breaches occur through compromised identities in one form or fashion. I think the tactics of social engineering just continue to evolve. All of us living on social media in some form or fashion helps accelerate and exacerbate this. Any quick thoughts on ’24 social engineering? I think it’s probably expect more. Is there anything someone can do different, or is there any way someone can think different?
Shahin Pirooz:
Yeah. I would say you’re spot on, there’s nothing new under the sun. We’ve been doing social engineering for as long as there have been people throwing away data into dumpsters. We were dumpster diving, collecting information, calling into a company and pretending we were somebody to get credentials and then get into the network. We don’t dumpster dive anymore because everything’s electronic now. Well, I’m going to say everything, but there are a lot of you out there still not. But mostly, everything is electronic.
So how do you get intel if everything is electronic, without breaking into the network first? The name social is your biggest hint. Social media is where we put everything about our daily lives and who we are, what our dog’s name is, what our aunt’s name is, what our kids’ names are. And a bad actor goes and figures out everything about you and then calls in pretending to be you to the help desk with all the intelligence. And they say, “What’s your mother’s maiden name?” Guess what, I just got that off social media. We’re good.
Kirstin Burke:
MGM gets breached.
Shahin Pirooz:
And MGM gets breached. It isn’t anything new, but the hackers have gotten smarter about how they do social engineering. And I think it’s important. And then bring in the AI conversation, which is, now we’re creating deepfake communications which sound like they’re coming from someone, because they take their voice, the way they speak, the way they write, and are able to communicate in a way that sounds like them. And they can actually model and sound-bite the voice of somebody based on recordings they find, and so on and so forth. It’s security awareness, it’s implementing a second set of eyes, doing inspections, doing peer review.
If the CEO calls you and says, “I’m in Aruba, send me a check for $10,000 because I just bought a yacht,” go talk to somebody else. Don’t jump through the hoops. Have a set of checks and balances in place that control that. If a vendor calls you and says we changed our account number, don’t just change the account number, have a set of checks and balances. Go and inspect. It’s security awareness.
Kirstin Burke:
Inspection.
Shahin Pirooz:
Yes, it’s security awareness training. Just like I said, we can’t create a generative AI that fills the role of a security analyst, but it can answer the frontline support questions. The same thing applies here. You can’t replace the human inspection, you have to have that discernment that says, “This doesn’t seem right. Our CEO never went off to Aruba and bought a yacht like this before.” If he does, then I can’t help you. But generally speaking, you’ve got to have that discernment to say there’s something not right, something fishy, this doesn’t feel right. But don’t rely on one person, put policy in place and do security awareness training that says, if you’re getting anything that is out of normal operations, a second set of eyes has to inspect it. You have to have two people approve it, you can’t have a single person approve it.
Kirstin Burke:
Or something that we even do internally, we have a security council. Security committee. So something comes in, something looks weird, it gets shot over there. Anyone in the company’s invited to do it. What is this? And I’m sure the team gets more than they want or need, but at least I’d rather they get more than less.
Shahin Pirooz:
I love receiving email. What are you talking about?
Kirstin Burke:
Well, great advice. And one of the things you started talking to me about last year, probably midyear, Shahin and I were pontificating about all sorts of things in technology, along with whiskey. And he starts talking about quantum computing. And I’m like, well, I watched Quantum Leap a long time ago. Quantum computing, I don’t know. I’m dating myself. What is that?
And so it was interesting to see, towards year-end and then at the World Economic Forum last week, all of a sudden quantum computing’s top of mind. And it’s more of a caution. More of a, hey, this is coming. But you’re starting to hear people talking about post-quantum cryptography. And it’s like, oh my gosh, what is this? People are barely getting through understanding the acceleration that AI is bringing to the world, and now we’re starting to hear about quantum computing. Quickly, what is it? What do people need to worry about now? And what do people just need to sit back and watch evolve?
Shahin Pirooz:
Yeah. The best way to think about this, and this is not an accurate depiction of it, but it helps to give you some context. We’ve worked in a two-dimensional world in computing for the last 30 years, and quantum computing effectively takes us into a third dimension. And what it’s really doing is it’s giving us the ability to process data faster, process things faster, faster memory, more memory, larger data sizes. It makes the computer that much faster, is the short of it. What’s the implication of that? The implication is that where it took us three weeks to crack a password on a regular computer, we can do it in 30 seconds now with quantum computing. The risk factor is that those things that were unbreakable, like we talk about 256 and 512 AES encryption, and nobody’s going to break it unless they have five mainframes running for three weeks. That’s not true anymore.
Quantum computing still is not mainstream. There’s a lot of people who are getting quantum ready. And what that really means is that the post-quantum algorithms for encryption, that’s where the cryptography comes from, are able to withstand the attacks from a quantum computer. That’s the concept. And here’s the real fundamental issue underlying encryption. Today, our answer to protecting our data, and what the regulatory concerns tell us, is encrypt your data. Because if the bad actor gets it, it’s just garbage. The worry now is, let’s say it takes… Let’s just stretch it out. I don’t think it’ll take 10 years, but let’s say it takes 10 years for quantum computing to become a reality. All a bad actor has to do is take your encrypted data when they’re exfiltrating data. We all know that ransomware happens all the time, which means the hackers are getting in, they’re exfiltrating data. And the people who have their data encrypted are like, it’s okay, it’s encrypted, they can’t do anything with it. But all they have to do is sit on it until that AES 256 encryption is a child’s-play activity on a quantum computer.
And if we go back in time, I remember the first time I had a multiprocessor, multicore system. We ran some security tools against the entire Active Directory for the company I was working at to see if a hacker could actually crack the passwords. We were able to, in 48 hours, completely crack the entire security account database and get all the passwords for all of our users. I think there were only two users who had something like a 16 character password, but the rest of them were using eight character passwords. They were Bob1 and Jenny2, and whatever it was. That was a moment in time. And we’re talking, to age myself, this was 1994. We’re talking about a leap ahead of that functionality, to where those passwords seemed really secure, and all of a sudden they weren’t. And it took us 48 hours to do 2,000 passwords in a security accounts database.
And we raised it up to the executive committee and said, “Look, we need to make our passwords harder, we need to make them more complicated. We need to make sure people aren’t using their names.” And that’s when the push for complex passwords really started; it was people like us in the security space figuring out how to get past security. We now have the same issue, but not for passwords; we have issues with databases that are encrypted. We have issues with files that are encrypted. That encryption will not be a hindrance if the encryption algorithms are not post-quantum ready. That is the fundamental shift that I would say, over the next year or two, people need to start thinking about. The risk in ’24 is why you should be looking at technologies that are post-quantum ready now. Because if your data gets stolen and you’ve encrypted it with something that is not post-quantum ready, then post-quantum, that data is clear as day.
Kirstin Burke:
And I want to wrap up, because we’ve probably gone a little longer today than we usually do, but it’s intriguing. When I hear post-quantum ready, when I hear quantum computing isn’t here yet, but beware it’s coming. How can something be created that’s post-quantum ready now when quantum isn’t even here? How does that work?
Shahin Pirooz:
It exists, it’s just not mainstream. It’s not something that the average user, the consumer, is going to get access to. It’s not that it doesn’t exist. There are quantum chipsets that are able to process in what are called qubits, instead of bits. And they are available now, and the researchers and developers and all those are working with them. When we say post-quantum ready, we mean ready for when quantum computing becomes mainstream.
Kirstin Burke:
Mainstream. Got it.
Shahin Pirooz:
It’s not that they’re creating concepts for… All of the companies that are in the cryptography space today have access to quantum computing to create quantum-ready capability. If you’re interested in this space, give us a call, we’d love to talk to you about it. There’s some really interesting tech out there where the encryption algorithms themselves become swappable. They’re able to be ripped out and replaced with quantum-capable algorithms without having to redo your entire architecture.
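One way to picture that swappable-algorithm idea is to keep the encryption call behind a small interface, so the cipher can later be replaced with a post-quantum scheme without touching the rest of the application. This is a minimal sketch of the pattern only; the class names are hypothetical and the "ciphers" are toy placeholders, not real or recommended algorithms.

```python
# Sketch of crypto agility: the application depends on a cipher interface,
# so the algorithm can be swapped for a post-quantum scheme later without
# re-architecting. The cipher classes are stand-ins, NOT real cryptography;
# use a vetted library in practice.

from abc import ABC, abstractmethod

class Cipher(ABC):
    name: str

    @abstractmethod
    def encrypt(self, plaintext: bytes) -> bytes: ...

    @abstractmethod
    def decrypt(self, ciphertext: bytes) -> bytes: ...

class ClassicalCipher(Cipher):
    """Placeholder for today's algorithm (e.g. AES-256-GCM via a real library)."""
    name = "classical-aes-256"

    def encrypt(self, plaintext: bytes) -> bytes:
        return b"C1:" + plaintext[::-1]          # toy transform, not real crypto

    def decrypt(self, ciphertext: bytes) -> bytes:
        return ciphertext[len(b"C1:"):][::-1]

class PostQuantumCipher(Cipher):
    """Placeholder for a post-quantum-ready scheme."""
    name = "pq-ready"

    def encrypt(self, plaintext: bytes) -> bytes:
        return b"PQ:" + bytes(b ^ 0x5A for b in plaintext)   # toy transform

    def decrypt(self, ciphertext: bytes) -> bytes:
        return bytes(b ^ 0x5A for b in ciphertext[len(b"PQ:"):])

def store_record(cipher: Cipher, record: bytes) -> tuple[str, bytes]:
    """Application code sees only the interface, so swapping the algorithm
    is a configuration change, not a rewrite."""
    return cipher.name, cipher.encrypt(record)

if __name__ == "__main__":
    for cipher in (ClassicalCipher(), PostQuantumCipher()):
        algo, blob = store_record(cipher, b"customer data")
        print(algo, cipher.decrypt(blob))
```

In practice the placeholders would be backed by a vetted cryptography library, and the algorithm choice would live in configuration, so a future swap to a post-quantum scheme is an operational change rather than a rewrite.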
There’s some very interesting technologies that are coming out today that address this problem head on, and are way ahead of the rest of the competition. And I would say, out of all the things coming out of the forecast for ’24, I would put my energy into post-quantum. Because we know that one out of two companies gets targeted and attacked. Out of those, three out of four of them get encrypted, which means that the data was exfiltrated and the hackers encrypted the data. And that means they have your data. Even if your data was encrypted, they have your data. So now apply that to this post-quantum, and that means that they have your data and at some point they’re going to be able to decrypt it.
Kirstin Burke:
Right.
Shahin Pirooz:
And eight out of 10 of those companies that were encrypted get hit more than once. So not only do they steal your data once, but they’re going to come back and get it again. All the rest of our security services help to protect and prevent that from happening. But you can’t assume you’re never going to get attacked. You can’t assume even with the best security, bad actors keep evolving and it’s really hard to stay ahead of them. So there will be data exfiltration in your future. Not maybe, there will be data taken out of your network. Be sure that you’re protected when quantum becomes a reality.
Kirstin Burke:
Awesome. Well, see, I learned something new again. Always. Thank you all. Thanks for sticking with us. If you have any trends that you’re thinking about that maybe we haven’t talked about, send them in chat or email us here, and we’d love to talk to you about them. Because we narrowed it down to those we thought were top. But obviously for each of you individually, I’m sure you’ve got your list of things that are going on this year as well. Thank you for joining us, Shahin. Thanks to all of you, and we’ll see you next month. Bye.
Shahin Pirooz:
Thanks everyone.