AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania and the author of The Diffusion of Military Power: Causes and Consequences for International Politics.

Topics discussed in this episode include:

- The sophisticated military robots developed by the Soviets during the Cold War
- How technology shapes human decision-making in war
- “Automation bias” and why having a “human in the loop” is much trickier than it sounds
- The United States’ stance on automation with nuclear weapons
- Why weaker countries might have more incentive to build AI into warfare
- How the US and Russia perceive first-strike capabilities
- “Deep fakes” and other ways AI could sow instability and provoke crises
- The multipolar nuclear world of the US, Russia, China, India, Pakistan, and North Korea
- The perceived obstacles to reducing nuclear arsenals

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.