Concrete actions anyone can take to help improve AI safety (with Kat Woods)

Read the full transcript here. (https://podcast.clearerthinking.org/episode/217/#transcript)

Why should we consider slowing AI development? Could we slow down AI development even if we wanted to? What is a "minimum viable x-risk"? What are some of the more plausible, less Hollywood-esque risks from AI? Even if an AI could destroy us all, why would it want to do so? What are some analogous cases where we slowed the development of a specific technology, and how did they turn out? What are some reasonable, feasible regulations that could be implemented to slow AI development? If an AI becomes smarter than humans, wouldn't it also be wiser than humans, and therefore more likely to know what we need and want and less likely to destroy us? Is it easier to control a more intelligent AI or a less intelligent one? Why do we struggle so much to define utopia? And what can the average person do to encourage safe and ethical development of AI?

Kat Woods is a serial charity entrepreneur who has founded four effective altruist charities. She runs Nonlinear (https://www.nonlinear.org/), an AI safety charity. Before starting Nonlinear, she co-founded Charity Entrepreneurship (https://www.charityentrepreneurship.com), a charity incubator that has launched dozens of charities in global poverty and animal rights. Before that, she co-founded Charity Science Health (https://www.givewell.org/charities/charity-science-health/all-content), which helped vaccinate 200,000+ children in India and, according to GiveWell's estimates at the time, was comparable in cost-effectiveness to the Against Malaria Foundation (AMF). You can follow her on Twitter at @kat__woods (https://twitter.com/kat__woods); you can read her EA writing here (https://forum.effectivealtruism.org/users/katherinesavoie?sortedBy=topAdjusted) and here (https://www.lesswrong.com/users/ea247?sortedBy=topAdjusted); and you can read her personal blog here (https://www.katwoods.org/).
Further reading:
• Robert Miles AI Safety @ YouTube (https://www.youtube.com/c/robertmilesai)
• "The AI Revolution: The Road to Superintelligence", by Tim Urban (https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)
• Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World, by Darren McKee (https://www.amazon.com/Uncontrollable-Threat-Artificial-Superintelligence-World/dp/B0CNNYKVH1)
• The Nonlinear Network (https://www.nonlinear.org/network.html)
• PauseAI (https://pauseai.info/)
• Dan Hendrycks @ Manifund (https://manifund.org/hendrycks) (AI regrantor)
• Adam Gleave @ Manifund (https://manifund.org/AdamGleave) (AI regrantor)

Staff:
• Spencer Greenberg (https://www.spencergreenberg.com/) — Host / Director
• Josh Castle (mailto:joshrcastle@gmail.com) — Producer
• Ryan Kessler (https://tone.support/) — Audio Engineer
• Uri Bram (https://uribram.com/) — Factotum
• Jennifer Vanderhoof — Transcriptionist

Music:
• Broke for Free (https://freemusicarchive.org/music/Broke_For_Free/Something_EP/Broke_For_Free_-_Something_EP_-_05_Something_Elated)
• Josh Woodward (https://www.joshwoodward.com/song/AlreadyThere)
• Lee Rosevere (https://archive.org/details/MusicForPodcasts04/Lee+Rosevere+-+Music+for+Podcasts+4+-+11+Keeping+Stuff+Together.flac)
• Quiet Music for Tiny Robots (https://www.freemusicarchive.org/music/Quiet_Music_for_Tiny_Robots/The_February_Album/05_Tiny_Robot_Armies)
• wowamusic (https://gamesounds.xyz/?dir=wowamusic)
• zapsplat.com (https://www.zapsplat.com/music/summer-haze-slow-chill-out-house-track-with-a-modern-pop-feel-warm-piano-chords-underpin-the-track-with-warm-pads-and-a-repetitive-synth-arpeggio/)

Affiliates:
• Clearer Thinking (https://www.clearerthinking.org/)
• GuidedTrack (https://guidedtrack.com/)
• Mind Ease (https://mindease.io/)
• Positly (https://positly.com/)
• UpLift (https://www.uplift.app/)

Read more: https://podcast.clearerthinking.org/episode/217/kat-woods-concrete-actions-anyone-can-take-to-help-improve-ai-safety

About the Podcast

Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Each week we invite a brilliant guest to bring four important ideas to discuss in an in-depth conversation. Topics include psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. We focus on ideas that can be applied right now to make your life better or to help you better understand yourself and the world, aiming to teach you the best mental tools to enhance your learning, self-improvement efforts, and decision-making.

We take on important, thorny questions like: What's the best way to help a friend or loved one going through a difficult time? How can we make our worldviews more accurate? How can we hone the accuracy of our thinking? What are the advantages of using our "gut" to make decisions, and when should we expect careful, analytical reflection to be more effective? Why do societies sometimes collapse, and what can we do to reduce the chance that ours does? Why is the world today so much worse than it could be, and what can we do to make it better? What are the good and bad parts of tradition, and are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead? How can we move beyond zero-sum, adversarial negotiations and create more positive-sum interactions?