
Defending Against AI-assisted Attacks: Ethan Schmertzler, Dispel, ICS Pulse Podcast

Courtesy: CFE Media and Technology

In recent years, the masses have gotten their first real taste of artificial intelligence (AI), thanks to tools like ChatGPT. But both attackers and defenders have been using AI in cybersecurity for some time. So what does this mean for the industry, and what should manufacturers and critical infrastructure be preparing for? And how much of a threat are AI-assisted attacks?

The ICS Pulse Podcast recently sat down with Ethan Schmertzler of Dispel to talk about AI’s impact on industrial cybersecurity and moving target defense. The following has been edited for clarity.

To listen to the complete podcast, click here. To read part one of the interview, click here.

ICS Pulse: With AI tools like ChatGPT, everybody can use them, whether you want to use them to help you with your marketing report or to try to write code. How has this democratization, giving it to the masses, impacted the OT/ICS cybersecurity industry?

Ethan Schmertzler: I think there’s an important question about the democratization of it and who’s actually giving it away. The algorithms themselves are expensive to make, but what’s incredibly expensive is the training that the algorithm then does against bodies of literature. By itself, if you just have an untrained AI, it’s like an infant. It doesn’t actually know how to deal with the world. You have to train it against information. The result of that training is stored in what are called the weights. An AI with the weights is what’s really powerful.

The reason why I emphasize that is because that training requires a lot of processors and a lot of processing time, which is millions of dollars. So when you look at what’s been made available, tools like OpenAI’s are publicly available to access and interface with, but the weights themselves and the algorithm itself are actually privately held. What you saw Meta do the other week, where they released their algorithm plus the weights that came out of that training, that’s now in the wild. The product of all that expensive analysis is now publicly available. The reason that distinction is really important is because once someone has both the algorithm and the back-end system behind it, those weights, they can now use that for whatever purpose they want. That won’t necessarily change.

Humans have done nefarious stuff on the cesspool of the internet for forever. This might just accelerate the speed at which people are doing that. Just because, though, you can produce this information, doesn’t mean that we should just say, “Oh, well, it’s out in the wild. People were already putting bad stuff out there on the internet.” It doesn’t absolve us of the responsibility for trying to curtail that information being out there, and disinformation being pushed out there. Just because there might be a torrent of it, that doesn’t mean that we have to give up.

ICSP: Let’s shift gears here to managed threat detection. Could you walk us through what that is and how cybersecurity companies can bolster their strategy to prepare for AI?

Schmertzler: Threat detection is the idea of trying to identify — it’s in the name — a threat inside of your environment. There are a couple of classical ways that we’ve done that. The first one, if you all remember antivirus from the early 2000s, those were what were called signature-based defenses, where we’d know exactly what the code looked like, the way your antibodies work in your body. If the virus is at all different, you’re going to get the flu again. Signature-based defenses are kind of good because they’ll catch most of the run-of-the-mill stuff, but anything at all novel, they’re going to miss. That’s how it used to work. Then we stepped up from there to heuristic-based defenses, which look for behavioral patterns.

The problem you’re going to see with AI is that attacks might become more sophisticated in terms of creating these permutations. It’s essentially like training an immune system against a virus. But then you’ve got some mad scientist who’s creating thousands and thousands and thousands of iterations of that virus, so you might get sick once and think, “Oh, well, I’ve got an immunity to that.” But now I’m being bombarded by things that are just slightly different enough.
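To make that fragility concrete, here is a deliberately simplified Python sketch, not any vendor’s actual engine, of exact-match signature detection: the known sample is caught, but a copy that differs by a single byte slips through.

```python
import hashlib

# Toy "signature database": hashes of known-malicious samples.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    """Exact-match detection, in the spirit of early antivirus signatures."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b"malicious payload v1"
permuted = b"malicious payload v2"   # one byte changed

print(signature_match(original))  # True  -- the known sample is caught
print(signature_match(permuted))  # False -- a trivial permutation evades the signature
```

An AI that churns out thousands of slightly different permutations is simply exploiting this gap at scale, which is why the behavioral and human-driven approaches below matter.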

If signature-based defenses, and potentially heuristic-based defenses, might be challenged by that, then we start relying on threat hunting, which is still to this day a somewhat human element. It requires that you take a more sophisticated look at, “If this software is acting in this certain way, why is it acting in this certain way? Are there things that we need to start having more cognition about?” It requires, at this point, human security analysts looking at it.

It’s probable that, as you train AIs to look over the shoulder of a human being making those judgment calls, you might get better and better and better at that. Or the machine might get better at that, because it’s able to say, “I may not have judgment, but I have a statistically probable chance of saying, ‘In these circumstances, this should not be happening, or this should be happening.’” To bring this all together, there are a couple of different tools available in the threat hunting toolkit. As we train AIs to have that human-level judgment, I think that will help protect us against the AI on the other side trying to break down the door. It’s the same way we’ve dealt with quantum encryption.

ICSP: If you have an AI-assisted attacker, is the AI-assisted defender at a disadvantage, or are they still able to maintain an operational advantage?

Schmertzler: They’ll be most likely at a disadvantage. If you have an AI-assisted attacker, they’re going to be at an advantage against a defender. The reason why is twofold. One, the attacker has the point of opportunity. If you’re using a traditional static defense model — so you have firewalls, privileged access management tools, identity access management tools, different heuristic-based defenses, threat hunting, all that sort of stuff — at the end of the day, if the attacker knows where your network is and they know who the people are that they’re going against, then they have all the time in the world to keep working the problem. That hasn’t changed. Before AI was there, that was still true. The fact that you have AI just, at this point, makes the human side of the attack — in other words, the cheap way of getting in — way easier.

An AI-assisted defender might teach you how to talk about these sorts of things with your employees. But until you get better, say, certificate-based protection against emails coming through and filtering that out, that’s not going to stop a human being from clicking on something right now. So it’s going to give the advantage to the attackers. It’s the reason why you’re seeing government frameworks get ahead of this and start arguing, especially in the defense industrial base, that static-based defenses and fighting that defense-versus-offense war is just a losing battle. You need to make the cost of going against infrastructure significantly higher. You do things like that by creating those disposable systems, those moving target defense networks, those shifting dynamic proxies. Basically, the whole point of this is to make it more expensive — AI or not — to go against a set of infrastructure.

ICSP: In terms of AI-driven threats, what are some novel or unexpected attack vectors that organizations should be prepared for?

Schmertzler: The most interesting one that would get under the radar would be going old school, so combining an AI with a robotic arm that can hand-write notes and start sending written notes to people as a way of conducting really sophisticated phishing attacks against people. Establishing relationships with someone, saying, “It was nice to meet you at so-and-so conference,” or, “I’m sorry we didn’t get to connect.” Coming up with novel ways to go after people that way, to build trust.

I think, stepping out from that one specific circumstance, what you’re doing is trying to outsource the attack to something which is relatively inexpensive. In this case, it’s an AI, and you’re coming up with playbooks that you can use to build trust and rapport with someone, and then exploit that rapport to get access to an environment.

ICSP: I want to have a longer conversation with you about this, but you’ve mentioned moving target defense a few times. Can you give us an idea of how you can use moving target defense to help guard against AI? What can it do for you in that situation?

Schmertzler: Moving target defense is an evolution of network topology. What does that actually mean? People are very familiar with the idea that you have to encrypt your data. If I’m sending my information to the bank, I should encrypt it so people can’t read what I’m sending them. That makes a lot of sense. What encryption doesn’t do is obscure the fact that I am communicating with my bank. Anyone who looks at the network traffic knows that I’m talking to my bank.
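As a small illustration of that distinction, the sketch below uses only the Python standard library and a stand-in hostname: TLS would encrypt what is sent, but the name resolution and the destination address on every packet are metadata an on-path observer can still read.

```python
import socket

# Stand-in for a bank's domain (example.com is reserved example address space).
hostname = "www.example.com"

# TLS encrypts the contents of the session, but the connection still has to be
# addressed somewhere: the DNS lookup and the destination IP in every packet
# header are plaintext metadata that an on-path observer can read.
try:
    addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
    print(f"Traffic to {hostname} is visibly addressed to: {sorted(addresses)}")
except socket.gaierror:
    print("No network access here, but the point stands: an observer on the path")
    print("sees the destination of every connection, encrypted or not.")
```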

Another example would be when corporations have internal networks, they often buy a block of internet space, of IP space, so that any traffic going to that block of IP space is known to belong to this bank or this institution or this industrial control system. Often, those IP addresses don’t change. Once I know where that IP block is — and you can look these things up — it is trivial to go after them. You get to work that problem on your own time. You can, in fact, look online. There are websites that are essentially search engines where you can say, “I’m looking for this kind of industrial control system. Show me all the ones that have been found on the internet.” Those are essentially free databases, which is dumb because you’ve now taken away the whole hard part of “Hey, go find my network.” An attacker already has that data.
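To show how little work a fixed allocation leaves an attacker, here is a short sketch with Python’s ipaddress module and a made-up documentation netblock: once a block is published as belonging to an organization, every address in it can be enumerated and attributed mechanically.

```python
import ipaddress

# Hypothetical netblock registered to "Example Utility Co."
# (203.0.113.0/28 is documentation address space, not a real allocation).
corporate_block = ipaddress.ip_network("203.0.113.0/28")

# Enumerating every host in a static block is trivial, and it only has to be
# done once, because the addresses never change.
targets = [str(host) for host in corporate_block.hosts()]
print(f"{len(targets)} stable addresses to probe, e.g. {targets[:3]}")

def attributable(ip: str) -> bool:
    """Any traffic to or from these addresses is attributable to the organization."""
    return ipaddress.ip_address(ip) in corporate_block

print(attributable("203.0.113.7"))   # True
print(attributable("198.51.100.9"))  # False
```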

The point of moving target defense networks, which used to be called reconnaissance-resistant networks, a name that really gets to their core, is that they are designed to make that first step, the reconnaissance stage, really, really expensive. Even if someone does gain target acquisition, they can’t know that a given IP address is associated with this organization, and in 24 hours, or 12 hours, or six hours, that IP address is gone. It’s not just that we’re doing IP switching on the same box. That physical box you were connected to, the virtual infrastructure you were going to, is gone. It’s been spun up someplace else. There’s no way to say, “Because this person is connecting to this IP address, they must be sending information to their bank.” It could just as easily be that I’m sitting at home watching Netflix. There’s no way for an attacker to get that kind of intelligence from that front-facing door, and it changes all the time.
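Dispel’s actual platform isn’t described in code anywhere in this interview, but the underlying idea can be sketched: treat the front-facing infrastructure as disposable, tear it down on a timer, and rebuild it somewhere else with a fresh address so that any reconnaissance an attacker gathers goes stale. The node fields, regions and rotation interval below are placeholders.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class Node:
    ip: str       # public address users connect through
    region: str   # where the virtual machine currently lives

REGIONS = ["us-east", "us-west", "eu-central"]   # placeholder regions

def provision_node() -> Node:
    """Stand-in for spinning up fresh virtual infrastructure with a new address."""
    ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
    return Node(ip=ip, region=random.choice(REGIONS))

def destroy_node(node: Node) -> None:
    """Stand-in for tearing the old machine down; its IP no longer maps to anything."""
    print(f"destroyed {node.ip} ({node.region}); stale reconnaissance now points nowhere")

def rotate(current: Node) -> Node:
    """Provision a replacement, then destroy the old node, so there is no fixed target."""
    replacement = provision_node()
    destroy_node(current)
    return replacement

node = provision_node()
for _ in range(3):
    print(f"serving traffic via {node.ip} in {node.region}")
    time.sleep(0.1)   # placeholder for a rotation interval measured in hours
    node = rotate(node)
```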

In real life, because moving target defense networks have been deployed for the last eight years now, roughly 60% of all the hops grown in the United States, the farms that grow them and the irrigation systems for them, are run on moving target defense networks. Water systems for a lot of the major cities in the United States run on moving target defense networks. Oil and gas systems run on them. The reason why is that it raises the cost of successfully going after infrastructure. The cost savings are huge for organizations because they’re no longer always under attack. They don’t have to be defending so aggressively all the time. It doesn’t mean you get away from those other technologies or the rest of the security suite. You absolutely need them. But it gives you breathing room.

ICSP: When we talked about this before, you compared the defense aspect of it to our nuclear arsenal. We shouldn’t know where these things are all the time. We want to keep moving these things so someone can’t attack them.

Schmertzler: Yeah. It’s exactly the concept of saying, “Let’s stop having castles that are on the tops of hills, and let’s put them into nuclear submarines, which you can hide underneath waves.” I think another way of thinking about it is it’s like being on a battlefield. You’re absolutely going to still want body armor and tanks and artillery. But imagine if you had an invisibility cloak while you’re out there, too. No one’s going to say, “Oh, we don’t want that.” I mean, we already push camouflage as much as we can. We should be doing at least camouflage on the internet as opposed to running around in red coats.
