Webinar

Securing LLMs: Learning from OWASP Guides

Test Automation, Test Management, Test Strategy

AI systems can feel like black boxes. But when you test them like any other software, surprising weaknesses start to surface.

What if someone could trick your LLM into revealing sensitive data, bypassing safeguards, or making decisions it was never meant to make?

In this session, we will explore how OWASP's widely recognized security guidance can help you understand and mitigate risks in LLM-powered applications.

This is not a theoretical lecture. You will see real-world examples drawn from news stories and ongoing projects, illustrating how vulnerabilities in AI systems actually play out. Through practical demonstrations and original cartoons, we will break down complex risks into clear, memorable lessons.

What You Will Learn

Large Language Models introduce new attack surfaces, but the underlying risks are often familiar. During this webinar, we will examine:

  • How poorly crafted prompts can become entry points for attackers
  • How weak safeguards allow sensitive information to leak
  • How biased training data can create hidden security exposure
  • How lack of logging and monitoring makes AI failures harder to detect
  • How traditional testing skills remain powerful tools for AI security

You will leave with practical strategies to strengthen your LLM integrations and a structured way to think about AI risks.

Why This Matters

Security issues in AI systems rarely appear dramatic at first. Small gaps, unclear boundaries, or misplaced trust in model behavior can quietly grow into serious problems.

By applying structured security thinking and established risk models, teams can move from reactive fixes to proactive prevention.

Key Takeaways

  1. AI security is continuous. LLM risks evolve as attackers adapt. Ongoing testing and review are essential.
  2. Traditional testing works. Exploratory testing, risk analysis, and structured thinking remain effective.
  3. System awareness is critical. Understanding how AI interacts with the rest of your application helps close security gaps.

Register Now

Join us to gain practical tools, a clearer risk framework, and a grounded perspective on securing LLM applications.

You do not need new tools to start improving AI security. You need the right mindset and a structured approach.

About The Author
Maryia Tuleika

A software testing and quality engineering professional with hands-on experience in securing AI-driven systems and LLM integrations. She combines strong system thinking with practical testing expertise to uncover real-world risks in modern applications. Maryia is passionate about translating complex security concepts into clear, actionable insights, often using visual storytelling and cartoons to make AI risks easier to understand.

Register now

Thursday, March 19, 8:00 AM EST / 14:00 CET
