Should we pause AI development until we're sure we can do it safely? (with Joep Meindertsma)

Read the full transcript here. (https://podcast.clearerthinking.org/episode/207/#transcript)

Should we pause AI development? What might it mean for an AI system to be "provably" safe? Are our current AI systems provably unsafe? What makes AI especially dangerous relative to other modern technologies? Or are the risks from AI overblown? What are the arguments in favor of not pausing — or perhaps even accelerating — AI progress? What is the public perception of AI risks? What steps have governments taken to mitigate AI risks? If thoughtful, prudent, cautious actors pause their AI development, won't bad actors still keep going? To what extent are people emotionally invested in this topic? What should we think of AI researchers who agree that AI poses very great risks and yet continue to work on building and improving AI technologies? Should we attempt to centralize AI development?

Joep Meindertsma is a database engineer and tech entrepreneur from the Netherlands. He co-founded the open source e-democracy platform Argu (https://argu.co/), which aimed to get people involved in decision-making. Currently, he is the CEO of Ontola.io (https://ontola.io), a software development firm from the Netherlands that aims to give people more control over their data; and he is also working on a specification and implementation for modeling and exchanging data called Atomic Data (https://docs.atomicdata.dev/). In 2023, after spending several years reading about AI safety and deciding to dedicate most of his time towards preventing AI catastrophe, he founded PauseAI (https://pauseai.info/) and began actively lobbying for slowing down AI development. He's now working to grow PauseAI and get more people to take action. Learn more about him on his GitHub page (https://github.com/joepio).

Staff
• Spencer Greenberg (https://www.spencergreenberg.com/) — Host / Director
• Josh Castle (mailto:joshrcastle@gmail.com) — Producer
• Ryan Kessler (https://tone.support/) — Audio Engineer
• Uri Bram (https://uribram.com/) — Factotum
• WeAmplify (https://www.weamplify.info/) — Transcriptionists
• Alexandria D. — Research and Special Projects Assistant

Music
• Broke for Free (https://freemusicarchive.org/music/Broke_For_Free/Something_EP/Broke_For_Free_-_Something_EP_-_05_Something_Elated)
• Josh Woodward (https://www.joshwoodward.com/song/AlreadyThere)
• Lee Rosevere (https://archive.org/details/MusicForPodcasts04/Lee+Rosevere+-+Music+for+Podcasts+4+-+11+Keeping+Stuff+Together.flac)
• Quiet Music for Tiny Robots (https://www.freemusicarchive.org/music/Quiet_Music_for_Tiny_Robots/The_February_Album/05_Tiny_Robot_Armies)
• wowamusic (https://gamesounds.xyz/?dir=wowamusic)
• zapsplat.com (https://www.zapsplat.com/music/summer-haze-slow-chill-out-house-track-with-a-modern-pop-feel-warm-piano-chords-underpin-the-track-with-warm-pads-and-a-repetitive-synth-arpeggio/)

Affiliates
• Clearer Thinking (https://www.clearerthinking.org/)
• GuidedTrack (https://guidedtrack.com/)
• Mind Ease (https://mindease.io/)
• Positly (https://positly.com/)
• UpLift (https://www.uplift.app/)

[Read more: https://podcast.clearerthinking.org/episode/207/joep-meindertsma-should-we-pause-ai-development-until-we-re-sure-we-can-do-it-safely]

About the Podcast

Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Each week we invite a brilliant guest to bring four important ideas to discuss for an in-depth conversation. Topics include psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. We focus on ideas that can be applied right now to make your life better or to help you better understand yourself and the world, aiming to teach you the best mental tools to enhance your learning, self-improvement efforts, and decision-making.

We take on important, thorny questions like:
• What's the best way to help a friend or loved one going through a difficult time?
• How can we make our worldviews more accurate? How can we hone the accuracy of our thinking?
• What are the advantages of using our "gut" to make decisions? And when should we expect careful, analytical reflection to be more effective?
• Why do societies sometimes collapse? And what can we do to reduce the chance that ours collapses?
• Why is the world today so much worse than it could be? And what can we do to make it better?
• What are the good and bad parts of tradition? And are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead?
• How can we move beyond zero-sum, adversarial negotiations and create more positive-sum interactions?