Katja Grace on the Largest Survey of AI Researchers

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI on either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts' extinction risk predictions
29:35 Opinions on slowing down AI development
31:25 AI "arms races"
34:00 AI risk areas with the most agreement
40:41 Do "high hopes and dire concerns" go hand-in-hand?
42:00 Intelligence explosions
45:37 Discontinuous progress
49:43 Impacts of AI crossing the human-level intelligence threshold
59:39 What does AI learn from human culture?
1:02:59 AI scaling
1:05:04 What should we do?

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.