ChatGPT has a language problem — but science can fix it

AIs built on Large Language Models have wowed the world by producing remarkably fluent text. However, their ability to do this is limited in many languages: as the data and resources available to train a model in a specific language drop, so does the model's performance, meaning that for some languages these AIs are effectively useless.

Researchers are aware of this problem and are looking for solutions, but the challenge extends far beyond the technical, with moral and social questions to be answered. This podcast explores how Large Language Models could be improved in more languages, and the issues that could arise if they are not.

Watch our related video of people trying out ChatGPT in different languages.

Hosted on Acast. See acast.com/privacy for more information.

About the podcast

The Nature Podcast brings you the best stories from the world of science each week. We cover everything from astronomy to zoology, highlighting the most exciting research from each issue of the Nature journal. We meet the scientists behind the results and provide in-depth analysis from Nature's journalists and editors.