As AI and large language models (LLMs) rapidly transform industries, they also introduce new vulnerabilities that traditional cybersecurity methods can’t fully address—data leaks, non-compliance, intellectual property theft, and more. In fact, 94% of IT leaders have allocated budgets to safeguard AI in 2024, and this number is expected to rise significantly as AI and LLM adoption continues. The modern attack surface has evolved, making AI and LLMs prime targets for cyberattacks.
In this edition of the Cyber Risk Series, we’ll tackle the most pressing AI security challenges, explore the hidden risks in your AI and LLM workloads, and forecast the 2025 AI security landscape. This event will bring AI security to the forefront, empowering security leaders to defend against emerging threats.
Key topics:
- Full AI & LLM Workload Discovery
- AI Vulnerability Management
- Risk-Based Prioritization
- Compliance & Legal Protection
December 4, 2024
9:00 AM – 12:15 PM PT
Don’t miss the opportunity to learn from industry experts. Register now.
Featuring
Graham Cluley
Smashing Security
Sumedh Thakar
President and CEO
Qualys
Jessie Jamieson, PhD
Senior Cyber Risk Engineer
CERT Division
CMU SEI
Steve Wilson
Chief Product Officer
Exabeam
OWASP Project Lead
Preeti Ravindra
Data, Math & Software for Security
Joe Petrocelli
VP Product Management
Qualys
Agenda
9:00 AM PT
Welcome to the Cyber Risk Series!
Join us for thoughtful discussion and expert insight on the importance of securing AI/LLM workloads.
Graham Cluley
Smashing Security
9:10 AM PT
Redefining Risk and Resilience in a New Cyber Era
In a time when AI and LLMs are transforming both opportunities and threat landscapes, Sumedh will examine how CISOs and cybersecurity leaders can address the emerging complexities of AI security. Attendees will gain insights into risk-informed approaches that allow organizations to harness AI’s potential while safeguarding against evolving vulnerabilities.
Sumedh Thakar
President and CEO
Qualys
9:30 AM PT
Chatbots Breaking Bad: Unmasking the Risks of LLMs
As AI and large language models (LLMs) become integral to business operations, understanding their unique risks is critical. In this session, I’ll draw from my experience building production LLM systems at Exabeam, insights from my work with OWASP, and lessons from my award-winning O’Reilly book to uncover the vulnerabilities lurking in today’s generative AI. We’ll examine key security gaps and discuss actionable strategies to mitigate threats in an evolving landscape.
Steve Wilson
Chief Product Officer
Exabeam
OWASP Project Lead
10:00 AM PT
Becoming More Comfortable with Risk-Informed Secure AI
Emergent technologies like generative AI can sometimes take security professionals out of their comfort zone and challenge preconceived notions about what it means to secure a system or capability. The new challenges that come with securing AI have also forced us to revisit risk and resilience in a threat landscape that has quickly shifted into novel attack spaces.
Effectively managing enterprise cybersecurity risks has historically been facilitated by the adoption of robust risk management frameworks, tools, and processes that directly link risks to actions. For this talk, we will illustrate how the concepts that have traditionally afforded us the ability to mitigate and respond to risk through security are the same concepts we can apply to secure capabilities enabled by emergent technologies, including AI. Along the way, we will examine what it is that makes us uncomfortable with AI and discuss concrete steps to take that will make us more comfortable with deploying these capabilities confidently and securely.
Jessie Jamieson, PhD
Senior Cyber Risk Engineer
CERT Division
CMU SEI
10:30 AM PT
Risk Mitigation for AI with Secure Development Lifecycle
This session provides actionable insights for organizations looking to build robust security into their AI development practices while balancing innovation with risk mitigation. We explore integrating the AI development and security lifecycles, offering a practical framework for risk management, and examine how secure development lifecycle (SDL) principles can be adapted for AI systems. The discussion covers distinct risk considerations from both AI model providers’ and consumers’ perspectives, along with appropriate controls and risk mitigation strategies at each stage.
Preeti Ravindra
Data, Math & Software for Security
11:30 AM PT
Navigating Security Challenges of Large Language Models with AI Asset Visibility and Model Scanning
As organizations rapidly adopt LLMs, security challenges arise, especially when development teams deploy these models without notifying security teams. Total AI enhances visibility, offers proactive scanning, and categorizes AI vulnerabilities, helping organizations secure their infrastructure and manage risk effectively. A demo showcases how users can manage AI assets and remediate vulnerabilities.
Joe Petrocelli
VP Product Management
Qualys