AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research: why empirical safety research is important and how it has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind.

Topics discussed in this episode include:
-Theoretical and empirical AI safety research
-Jan's and DeepMind's approaches to AI safety
-Jan's work and thoughts on recursive reward modeling
-AI safety benchmarking at DeepMind
-The potential modularity of AGI
-Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities
-Joining the DeepMind safety team

You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/

Timestamps:
0:00 Intro
2:15 Jan's intellectual journey in computer science to AI safety
7:35 Transitioning from theoretical to empirical research
11:25 Jan's and DeepMind's approach to AI safety
17:23 Recursive reward modeling
29:26 Experimenting with recursive reward modeling
32:42 How recursive reward modeling serves AI safety
34:55 Pessimism about recursive reward modeling
38:35 How this research direction fits in the safety landscape
42:10 Can deep reinforcement learning get us to AGI?
42:50 How modular will AGI be?
44:25 Efforts at DeepMind for AI safety benchmarking
49:30 Differences between the AI safety and mainstream AI communities
55:15 Most exciting piece of empirical safety work in the next 5 years
56:35 Joining the DeepMind safety team

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.