Episode 49: AGI Alignment and Safety

Is Elon Musk right that Artificial General Intelligence (AGI) research is like 'summoning the demon' and should be regulated? In episodes 48 and 49, we discussed how our genes 'align' our interests with their own using carrots and sticks (pleasure/pain) as well as attention and perception. If our genes can run a General Intelligence (i.e., Universal Explainer) alignment and safety 'program' on us, what's to stop us from doing the same to the future Artificial General Intelligences (AGIs) we create? But even if we can, should we?

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon." --Elon Musk

Om Podcasten

A podcast that explores the unseen and surprising connections between nearly everything, with special emphasis on intelligence and the search for Artificial General Intelligence (AGI) through the lens of Karl Popper's Theory of Knowledge. David Deutsch argued that Quantum Mechanics, Darwinian Evolution, Karl Popper's Theory of Knowledge, and Computational Theory (aka "The Four Strands") represent an early 'theory of everything', whether applied to science, philosophy, computation, religion, politics, or art. So we explore everything. Support us on Patreon: https://www.patreon.com/brucenielson/membership