AI, LLMs and Security: How to Deal with the New Threats

The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, Tech Evangelist and Lance Seidman, Backend Engineer at Atomic Form discussed these security challenges, emphasizing the need for human oversight to protect AI systems.

Om Podcasten

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack