“I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027” by Vasco Grilo

Agreement

78% of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and I currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the first person I thought of. He said: "I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90%". I doubt doom in the sense of human extinction is anywhere near as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between [...]

---

Outline:
(00:07) Agreement
(03:53) Impact
(05:18) Acknowledgements

The original text contained 5 footnotes which were omitted from this narration.

---

First published: June 4th, 2024

Source: https://forum.effectivealtruism.org/posts/GfGxaPBAMGcYjv8Xd/i-bet-greg-colbourn-10-keur-that-ai-will-not-kill-us-all-by

---

Narrated by TYPE III AUDIO.
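As a rough check on the estimate quoted above (this calculation is not part of the original excerpt), an annual extinction risk of 10^-7 held constant over the next 10 years compounds to a cumulative risk of roughly 10^-6:

\[
P(\text{extinction within 10 years}) = 1 - \left(1 - 10^{-7}\right)^{10} \approx 10^{-6}
\]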

About the podcast

Audio narrations from the Effective Altruism Forum, including curated posts and posts with at least 125 karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.