Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

It's well established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will take advantage of the degrees of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization of the goals in the objective function, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger of the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.

Topics discussed in this episode include:
- Inner and outer alignment
- How and why inner alignment can fail
- Training competitiveness and performance competitiveness
- Evaluating imitative amplification, AI safety via debate, and microscope AI

You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/

Timestamps:
0:00 Intro
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.