05: The Likelihood and Risks of Superintelligent Machines

Kurt Andersen speaks with computer scientist Stuart Russell about the risks of machines reaching superintelligence and advancing beyond human control. To avoid this, Russell believes, we need to start over with AI and build machines that are uncertain about what humans want.

STUART RUSSELL is a computer scientist and professor at the University of California, Berkeley. He is the author, most recently, of Human Compatible: Artificial Intelligence and the Problem of Control. He has served as Vice-Chair of the World Economic Forum’s Council on AI and Robotics and as an advisor to the United Nations on arms control. He is also the author (with Peter Norvig) of the universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach.

A transcript of this episode is available at Aventine.org.

About the Podcast

Whether you’re aware of it or not, you are part of one of the most ambitious projects we as humans have ever attempted: Rebuilding the world, pretty much from the ground up, in order to switch from fossil fuels to clean energy sources. It’s a major undertaking, one that will require staggering financial investment and the success of technologies many people have never heard of. In this season of The World as You'll Know It, science journalist Arielle Duhaime-Ross goes deep inside the world of cutting-edge climate technologies and asks: How is this all going to work? The answers — from some of the world’s most innovative and audacious thinkers, builders and investors — reveal the promise, obstacles and tradeoffs of a new clean-energy landscape that will shape the way we live.