Hallucinations

In this episode, host Jerry Cuomo introduces Mihai Criveti as the teammate he turns to first with any AI-related question. Together, they tackle the complex issue of hallucinations in Large Language Models (LLMs). Mihai clarifies why these models can sometimes produce misleading or incorrect outputs and offers practical mitigations such as few-shot prompting and Retrieval Augmented Generation (RAG). If you're a business leader interested in implementing AI, or simply curious about its limitations, this episode is a must-listen. Gain valuable insights from two experts as they discuss how to use these models more responsibly and effectively.

For a deeper understanding of the topics discussed in this episode, we highly recommend reading Mihai Criveti's article, "Understanding GenAI Large Language Model Limitations, and How Retrieval Augmented Generation Can Help," available on Medium.

Key Takeaways:
[00:32 - 01:06] Intro
[01:54 - 04:10] LLMs and their limitations
[04:32 - 07:31] What is meant by hallucination
[07:39 - 10:28] How to mitigate hallucinations

* Cover art was created with the assistance of DALL·E 2 by OpenAI.
** Music for the podcast created by Mind The Gap Band - Cox, Cuomo, Haberkorn, Martin, Mosakowski, and Rodriguez.
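As a rough illustration of the RAG idea mentioned above, here is a minimal sketch: retrieve the passage most relevant to a question, then ground the prompt in that passage before sending it to a model. The documents, the word-overlap retriever, and the prompt wording are hypothetical placeholders, not the approach from Mihai's article or any particular library.

# A minimal Retrieval Augmented Generation (RAG) sketch.
# Everything here is illustrative: the documents, the scoring
# method, and the prompt template are placeholder assumptions.

documents = [
    "IBM Fellow is the highest technical honor at IBM.",
    "Retrieval Augmented Generation supplies an LLM with source passages at query time.",
    "Few-shot prompting includes worked examples in the prompt to steer the model.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question
    (a toy stand-in for a real embedding-based retriever)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt: context first, then an instruction
    telling the model to answer only from that context."""
    context = retrieve(question, documents)
    return (
        "Answer using only the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_prompt("What is Retrieval Augmented Generation?"))

A production system would swap the word-overlap scorer for embedding-based vector search, but the grounding step, which is what reduces hallucinations, stays the same.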

About the Podcast

Join Jerry Cuomo, IBM Fellow and tech innovator, on Wild Ducks, a podcast that offers a personal, behind-the-scenes look at emerging tech trends, including AI, cybersecurity, quantum computing, and blockchain. Inspired by Thomas J. Watson Jr.'s wild duck analogy, the show celebrates unconventional thinkers who challenge the status quo. Enjoy a mix of expert interviews and engaging fictional guests as they explore how these technologies transform industries and empower individuals. Subscribe now to stay on top of cutting-edge innovations. See you on the tech side!