AI can’t handle the truth when it comes to the law

Almost one in five lawyers is using AI, according to an American Bar Association survey. But a growing number of legal horror stories involve tools like ChatGPT, because chatbots have a tendency to make stuff up, such as legal precedents from cases that never happened. Marketplace’s Meghan McCarty Carino spoke with Daniel Ho at Stanford’s Institute for Human-Centered Artificial Intelligence about the group’s recent study on how frequently three of the most popular language models, from OpenAI (the maker of ChatGPT), Meta and Google, hallucinate when asked to weigh in on or assist with legal cases.

About the Podcast

Every weekday, host Kai Ryssdal helps you make sense of the day's business and economic news — no econ degree or finance background required. "Marketplace" takes you beyond the numbers, bringing you context. Our team of reporters around the world speaks with CEOs, policymakers and regular people just trying to get by.