As enterprises race to integrate generative AI into their applications and workflows, adversaries are finding new ways to exploit language models through prompt injection attacks to leak sensitive data and bypass security controls.
But how do these attacks actually work, and what can organizations do to defend their genAI applications against them?
Join us for an exclusive deep dive with Rob Truesdell, Chief Product Officer at Pangea, as we explore the evolving landscape of prompt injection threats and the latest strategies to secure genAI applications.
Can't make it on the day? Register above and receive a recorded version of the session straight to your inbox.
This session will cover:
- How prompt injection works – A breakdown of direct and indirect techniques, with real-world attack examples and data.
- What LLM providers are doing – A look at native defenses built into top models to counteract prompt injection risks.
- The insider vs. outsider threat – How adversaries both inside and outside an organization can manipulate genAI models.
- Risk mitigation strategies – Engineering and security best practices to prevent, detect, and respond to prompt injection attempts.
- Measuring effectiveness – How to evaluate the efficacy of prompt injection detection mechanisms.
This live session is a must-attend for security leaders, AI engineers, and product teams looking to understand and mitigate the risks of AI-powered applications in an increasingly adversarial landscape.