Iason Gabriel on Foundational Philosophical Questions in AI Alignment

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and the technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.

Topics discussed in this episode include:

- How moral philosophy and political theory are deeply related to AI alignment
- The problem of dealing with a plurality of preferences and philosophical views in AI alignment
- How the is-ought problem and metaethics fit into alignment
- What we should be aligning AI systems to
- The importance of democratic solutions to questions of AI alignment
- The long reflection

You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/

Timestamps:

0:00 Intro
2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
3:12 What AI alignment is
6:07 The technical and normative aspects of AI alignment
9:11 The normative being dependent on the technical
14:30 Coming up with an appropriate alignment procedure given the is-ought problem
31:15 What systems are subject to an alignment procedure?
39:55 What is it that we're trying to align AI systems to?
01:02:30 Single-agent and multi-agent alignment scenarios
01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals?
01:30:28 The long reflection
01:53:55 Where to follow and contact Iason

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.