Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We cover learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:
00:00 Preview
00:54 Brain-like AGI Safety
13:16 Controlled AGI versus Social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.