Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning lets us solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become ever more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.

Topics discussed in this episode include:

- Moral epistemology
- The potential relevance of metaethics to AI alignment
- The importance of moral learning in AI systems
- Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views

You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/

Timestamps:

0:00 Intro
3:05 Does metaethics matter for AI alignment?
22:49 Long-reflection considerations
26:05 Moral learning in humans
35:07 The need for moral learning in artificial intelligence
53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
1:38:50 The need for engagement between philosophers and the AI alignment community
1:40:37 Where to find Peter's work

This podcast is possible because of the support of listeners like you. If you found this conversation meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.