485: Oh, Algorithms!

There's been a lot of discussion about algorithmic bias, but the focus has been on bias in historical data. However, it's a much bigger problem than that. What about looking forward? That's what Spark is doing this week: a look at why it's so difficult to encode fairness, and why a rising computer science star still believes we can use machine learning for social good.

For the past several years, we've seen headlines about algorithms producing racist, nonsensical, and even dangerous results, alongside calls for greater transparency and oversight. But what's really at the heart of the problem? Sidney Fussell, a technology journalist at WIRED, looks at issues in machine learning, surveillance, and the abuse of data.

Machine learning and artificial intelligence research has shown time and again how difficult it is to build AI that doesn't produce unintentional bias. Companies like Google and Amazon have come under fire for deploying AI found to produce sexist and racist results. So why is it so difficult to encode social values into machine learning? Brian Christian is an AI authority and the author of the new book The Alignment Problem: Machine Learning and Human Values. He explains why encoding social values isn't as simple as telling a program to ignore variables like race and gender.

Machine learning algorithms are used in everything from facial recognition to credit scoring, and concerns are mounting about how they can unintentionally reinforce bias and harm marginalized communities. But can machine learning also be used to advance social good and open up opportunities for marginalized people? Rediet Abebe is a computer scientist and co-founder of Mechanism Design for Social Good. She explains how we can design systems that create opportunity and avoid bias.
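The point Brian Christian raises, that telling a model to ignore race or gender doesn't remove bias, comes down to proxy variables: other features correlated with a protected attribute let a model reconstruct it anyway. As a minimal sketch (not from the episode; the synthetic data, variable names, and the use of numpy and scikit-learn are all assumptions for illustration), here is "fairness through unawareness" failing in practice:

# A minimal sketch of why dropping a protected attribute does not
# remove bias: a correlated proxy feature lets the model reconstruct it.
# Assumes numpy and scikit-learn; all names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., a demographic group), never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature strongly correlated with group membership
# (think postal code or browsing history).
proxy = group + rng.normal(0, 0.3, size=n)

# Historical labels are biased: group 1 was approved less often.
skill = rng.normal(0, 1, size=n)  # legitimate signal
label = (skill + 0.8 * (1 - group) + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

# "Fairness through unawareness": train only on skill and the proxy,
# with the group column deliberately excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Approval rates still diverge by group, because the model picked up
# the proxy, even though it never saw the group column directly.
for g in (0, 1):
    print(f"group {g}: approval rate = {pred[group == g].mean():.2f}")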

About the Podcast

Spark on CBC Radio One: Nora Young helps you navigate your digital life by connecting you to fresh ideas in surprising ways.