Workshop Hall
Guarding the Genie: Securing LLM and Building Trust
Large Language Models (LLMs) have become indispensable tools across industries, but their immense power also presents significant security risks. This talk focuses on the OWASP Top 10 for LLMs, a critical framework for identifying and mitigating the most prevalent threats to these systems. We will delve into specific attack vectors such as prompt injection, data poisoning, model theft, and privacy breaches, providing real-world examples and practical defense strategies. By understanding these vulnerabilities and implementing robust countermeasures, organizations can build secure and trustworthy LLM applications. This talk aims to equip attendees with the knowledge and tools to protect their LLMs and LLM powered applications from the most dangerous threats.
Machine Learning
Deep Learning