Inside OpenAI's trust and safety operation - with Rosie Campbell

No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL·E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged public enthusiasm for AI technologies. With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes. In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguard...

About the Podcast

Knowledge Distillation is the podcast that brings together a mixture of experts from across the Artificial Intelligence community. We talk to the world’s leading researchers about their experiences developing cutting-edge models, as well as the technologists taking AI tools out of the lab and turning them into commercial products and services. Knowledge Distillation also takes a critical look at the impact of artificial intelligence on society, opting for expert analysis instead of hysterical headlines.

We are committed to featuring at least 50% female voices on the podcast, elevating the many brilliant women working in AI.

Host Helen Byrne is a VP at the British AI compute systems maker Graphcore, where she leads the Solution Architects team, helping innovators build their AI solutions using Graphcore’s technology. Helen previously led AI Field Engineering and worked in AI Research, tackling problems in distributed machine learning. Before landing in Artificial Intelligence, Helen worked in FinTech and as a secondary school teacher. Her background is in mathematics and she has an MSc in Artificial Intelligence.

Knowledge Distillation is produced by Iain Mackenzie.