Audio long-read: Rise of the robo-writers

In 2020, the artificial intelligence (AI) GPT-3 wowed the world with its ability to write fluent streams of text. Trained on billions of words from books, articles and websites, GPT-3 was the latest in a series of ‘large language model’ AIs that companies around the world use to improve search results, answer questions or propose computer code.

However, these large language models are not without their issues. Because their training captures only the statistical relationships between words and phrases, they can generate toxic or dangerous outputs. Preventing such responses is a huge challenge for researchers, who are attempting to do so by addressing biases in training data and by instilling these AIs with common sense and moral judgement.

This is an audio version of our feature: Robo-writers: the rise and risks of language-generating AI

About the podcast

The Nature Podcast brings you the best stories from the world of science each week. We cover everything from astronomy to zoology, highlighting the most exciting research from each issue of the journal Nature. We meet the scientists behind the results and provide in-depth analysis from Nature's journalists and editors.