How existing safety mitigation safeguards fail in LLMs with Khaoula Chehbouni, PhD Researcher at McGill and MILA

Large Language Models, or LLMs, may be the most popular type of AI system. They are often used as an alternative to search engines, even though they should not be: the information they give users only resembles and mimics human speech and is not always factual, among many other issues discussed in this episode. Our guest today is Khaoula Chehbouni, a PhD student in Computer Science at McGill University and Mila (Quebec AI Institute). Khaoula was awarded the prestigious FRQNT Doctoral Training Scholarship to research fairness and safety in large language models. She previously worked as a Senior Data Scientist at Statistics Canada and completed her master's in Business Intelligence at HEC Montreal, where she received the Best Master's Thesis award. In this episode, we talked about the impact of the Western narratives on which LLMs are trained, the limits of trust and safety, how racism and stereotypes are mirrored and amplified by LLMs, and what it is like to be a minority in a STEM academic environment. I hope you’ll enjoy this episode.

About the Podcast

Shifting the narrative from Big Tech to Responsible Tech by sharing and archiving the work of change makers. At the intersection of technology and social justice, Activists Of Tech is a seasonal weekly podcast dedicated to amplifying and archiving minority voices among activists, thought leaders, academics, and practitioners of responsible tech. Shifting the narrative from Big Tech to responsible tech takes honesty: this is a "say it as it is" type of podcast, and no topic is too taboo to be named and addressed. The topics covered span a variety of responsible tech areas and focus on social justice, AI harm, AI bias, AI regulation and advocacy, minorities in tech, gender equality, tech and democracy, social media, and algorithmic recommendations, to name a few. We also talk about solutions and how to make tech inclusive and beneficial for all.