AI hallucinations: Turn on, tune in, beep boop

ChatGPT isn’t always right. In fact, it’s often very wrong, giving faulty biographical information about a person or whiffing on the answers to simple questions. But instead of saying it doesn’t know, ChatGPT often makes things up. Chatbots can’t actually lie, but researchers sometimes call these untruthful outputs “hallucinations”—not quite a lie, but a vision of something that isn’t there. So, what’s really happening here, and what does it tell us about the way AI systems err?

Presented by Deloitte

Episode art by Vicky Leta

About the Podcast

We’re fascinated by everyday objects and what they can tell us about the global economy. Join us every week as reporters from our global newsroom dig into the most intriguing facets of an object: where it came from, how it got to us, and what it can tell us about the forces that are changing the way we live and work.