Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature that contribute to extinction risk, and how we might better embrace existential threats.

Topics discussed in this episode include:
-The projects of awakening and growing the wisdom with which to manage technologies
-What might be possible by embarking on the project of waking up
-Facets of human nature that contribute to existential risk
-The dangers of the problem-solving mindset
-Improving the effective altruism and existential risk communities

You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/

Timestamps:
0:00 Intro
3:40 Albert Einstein and the quest for awakening
8:45 Non-self, emptiness, and non-duality
25:48 Stephen's conception of awakening, and making the wise more powerful vs. the powerful more wise
33:32 The importance of insight
49:45 The present moment, creativity, and suffering/pain/dukkha
58:44 Stephen's article, Embracing Extinction
1:04:48 The dangers of the problem-solving mindset
1:26:12 Improving the effective altruism and existential risk communities
1:37:30 Where to find and follow Stephen

This podcast is possible because of the support of listeners like you. If you found this conversation meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

About the Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.