This article is based on Lee Klarich’s talk at the AI Accelerator Summit in San Jose.


Hey there! My name's Lee Klarich. I'm the Chief Product Officer for Palo Alto Networks, a role I've had in various forms for almost 20 years – since the company started. 

In that time, I've seen plenty of new technologies emerge that require security and enablement. However, I don't think I've seen anything adopted as rapidly as generative AI has been over the last 12 months. It's incredible.

Think about the amazing things being worked on – exciting new creative concepts, advancements in healthcare, and much more. Just last week, I was on a panel with someone from the Mayo Clinic discussing how AI will change medicine, transform how doctors work, and even revolutionize prosthetics.

We've got all sorts of cool stuff emerging, and I'm hoping you're creating “good AI bees” that positively pollinate rather than killer bees! But alas, I work in cybersecurity. So, the AI developments I pay attention to are how attackers exploit generative AI to make our lives more difficult. 

So, in this article, I'll discuss the rise of adversarial AI, how we're using AI to counter this challenge, and how we hope to enable you to do amazing things responsibly.

AI is expanding the horizons of what’s possible – for cyber threats

Let’s look at some examples of how adversaries use AI. 

Remember the SolarWinds attack of 2020? In case you don’t, let me remind you. The Russian state-sponsored group APT29 compromised the source code of SolarWinds' IT monitoring software, so when SolarWinds pushed updates, its customers were compromised too. The attackers systematically worked through US government orgs, system integrators, and cybersecurity companies – around 100 high-value targets.

The interesting part is that APT29 could have targeted around 3,000 companies, but relying on human operators limited them to roughly 100 attacks in series before they were caught. Imagine if, instead of relying on people, they had used AI to automatically execute 3,000 parallel attacks. That's scary.

At Palo Alto Networks, we're thinking hard about this. Attacks are speeding up – our experiments show that ransomware attackers could use automation to carry out attacks four to five times faster than they do today.

Attackers are already getting faster, thanks to automation tools – and we’re getting faster at blocking them. Just this morning, my team's researchers showed me how quickly we can use AI to anticipate what adversaries will do next, so that we can proactively retrain our security capabilities.

Despite everything that’s going on, I’m one of the rare optimists in the cybersecurity space. Most people dwell on the horrible threats, but I think AI provides more opportunity than risk – if we take advantage of it in the right way.

Better security with AI

Let me share a couple of examples of how we've used AI, which might spark ideas for how you could use it too. 

Reactive cybersecurity

For many years, whenever anyone in the cybersecurity industry uncovered a new attack, we'd generate a signature to detect and block it. This worked for known attacks but not for zero-day or otherwise unseen attacks – you can't write a signature for the unknown. There was always a “patient zero” in this model of attack prevention.

The cycle was to detect attacks, create signatures, and push those signatures out. We got faster and faster at this – from weeks with floppy disks in the '90s down to minutes. But we were still stuck in the same detect-update-respond model.
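
To make that limitation concrete, here's a minimal, purely illustrative sketch of signature-based detection. The signature names and byte patterns below are hypothetical, and real scanning engines are far more sophisticated, but the core idea is the same: match known patterns, miss everything else.

```python
# A toy signature-based scanner. The "signatures" here are made-up
# byte patterns, not real indicators of compromise.

KNOWN_SIGNATURES = {
    "Trojan.ExampleA": b"\xde\xad\xbe\xef",
    "Worm.ExampleB": b"powershell -enc",
}

def scan(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in the payload."""
    return [
        name
        for name, pattern in KNOWN_SIGNATURES.items()
        if pattern in payload
    ]

# The fundamental weakness: a never-before-seen payload matches nothing,
# so it passes through until someone writes a signature for it.
zero_day = b"brand-new attack no one has catalogued yet"
print(scan(zero_day))  # [] - the "patient zero" goes undetected
```

You can make this loop faster and faster, but it can never catch the first instance of a brand-new attack – which is exactly the gap that AI-generated attacks exploit.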

The problem is that AI makes it possible for nearly every attack to be a zero-day attack. So how do we deal with zero-day attacks enabled by AI?