342. Superalignment with Sam Altman’s Values

We talk about how everybody on the superalignment team at OpenAI—focused on safety, risk, adversarial testing, societal impacts, and existential concerns—is resigning, including high-profile people like Ilya Sutskever. And nobody can talk about it because of draconian rules (even for Silicon Valley) about non-disclosure and non-disparagement that people must sign (or risk their vested equity) upon exiting the company. For us, the turmoil at OpenAI is indicative of a conflict between true believers (superalignment) and cynical operators (Sam Altman).

Outro: Aunty Donna – Real Estate Agents https://www.youtube.com/watch?v=VGm267O04a8

••• “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

••• ChatGPT can talk, but OpenAI employees sure can’t https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

Subscribe to hear more analysis and commentary in our premium episodes every week! https://www.patreon.com/thismachinekills

Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)

About the Podcast

A podcast about technology and political economy /// Agitprop against innovation and capital /// Hosted by Jathan Sadowski and Edward Ongweso Jr., produced by Jereme Brown /// Hello friends and enemies. Listen anywhere that fine podcasts are distributed. Subscribe at patreon.com/thismachinekills to get premium episodes every week.