The AI application security puzzle
AI systems can be surprisingly easy to manipulate. With carefully crafted prompts, attackers can trick an AI system into revealing sensitive data, making unsafe decisions, or even ignoring its own rules. And this is only one way to harm AI systems. As more companies add AI to their products, the attack surface grows, and attackers constantly find new ways to exploit it.
Together with you, Maryia Tuleika will explore examples of how AI systems fail in practice.
We will look at prompt manipulation, weak safeguards, and other issues that can turn a helpful AI assistant into a security risk. How easily can an attacker fingerprint an LLM? What is a zero-click attack? Can a simple chatbot become a gateway to larger system problems?
The good news? You don’t need to reinvent the wheel to test AI. Strong systems thinking, traditional testing techniques, and a critical mindset are already powerful tools for uncovering vulnerabilities. The same skills used to break and improve software (such as exploratory testing, risk analysis, and extensive logging and monitoring) can help make AI systems safer and more predictable.
How to register for the event if you are not a member:

- Create an account.
Password advice: use the password generator, or avoid dictionary words (even when using special characters).
- Check your email inbox for an email from ShiftSync. Click the button in the email to activate your account.
- Go back to this page and click Attend.
- Now you are registered! ✨