Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
-Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI
-The relationship between AI safety, control, and alignment
-Virtual worlds as a proposal for solving multi-multi alignment
-AI security

You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:35 Roman's primary research interests
4:09 How theoretical proofs help AI safety research
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly
12:06 Impossibility results clarify what we can do
14:19 Roman's results on unexplainability and incomprehensibility
22:34 Focusing on comprehensibility
26:17 Roman's results on uncontrollability
28:33 Alignment as a subset of safety and control
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment
33:40 What does it mean to solve AI safety?
34:19 What do the impossibility results really mean?
37:07 Virtual worlds and AI alignment
49:55 AI security and malevolent agents
53:00 Air gapping, boxing, and other security methods
58:43 Some examples of historical failures of AI systems and what we can learn from them
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI
1:08:20 Are oracles a valid approach to AI safety?
1:10:30 Roman's final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.