Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]

This is an interview with Max Tegmark, MIT professor, co-founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris. See the full article from this episode: https://danfaggella.com/tegmark1 Listen to the full podcast episode: https://youtu.be/yQ2fDEQ4Ol0 This episode referenced the following essays and resources: -- Max's A.G.I. Framework / "Keep the Future Human"...

About the Podcast

What should be the trajectory of intelligence beyond humanity? The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.