Instagram and the Dangers of Non-Transparent AI: Mechanistic Interpretability

In this episode, we explore the AI concept of mechanistic interpretability - understanding how and why an AI model makes certain decisions. Using Instagram's machine learning-based feed ranking algorithm as an example, we discuss the dangers of algorithms that operate as black boxes. When the mechanics behind AI systems are opaque, issues like bias can go undetected. By explaining ideas like transparency in AI and analyzing a case study on potential racial bias, we underscore why interpretable AI matters for fairness and accountability. This podcast aims to make complex AI topics approachable, relating them to real-world impacts. Join us as we navigate the fascinating intersection of technology and ethics. Want more AI info for beginners? 📧 ⁠Join our Newsletter⁠! This podcast was generated with the help of artificial intelligence. We fact-check with human eyes, but there might still be hallucinations in the output. Music credit: "Modern Situations by Unicorn Heads"

About the Podcast

"A Beginner's Guide to AI" makes the complex world of Artificial Intelligence accessible to all. Each episode breaks down a new AI concept into everyday language, tying it to real-world applications and featuring insights from industry experts. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI! There are three episode formats: AI-generated episodes, interviews with AI experts, and my own thoughts. Want to get your AI going? Get in contact: dietmar@argo.berlin