Special: Jaan Tallinn on Pausing Giant AI Experiments

On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.

Timestamps:
0:00 Nathan introduces Jaan
4:22 AI safety and Future of Life Institute
5:55 Jaan's first meeting with Eliezer Yudkowsky
12:04 Future of AI evolution
14:58 Jaan's investments in AI companies
23:06 The emerging danger paradigm
26:53 Economic transformation with AI
32:31 AI supervising itself
34:06 Language models and validation
38:49 Lack of insight into evolutionary selection process
41:56 Current estimate for life-ending catastrophe
44:52 Inverse scaling law
53:03 Our luck given the softness of language models
55:07 Future of language models
59:43 The Moore's law of mad science
1:01:45 GPT-5 type project
1:07:43 The AI race dynamics
1:09:43 AI alignment with the latest models
1:13:14 AI research investment and safety
1:19:43 What a six-month pause buys us
1:25:44 AI passing the Turing Test
1:28:16 AI safety and risk
1:32:01 Responsible AI development
1:40:03 Neuralink implant technology

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.