Rethinking the lifecycle of AI when it comes to deepfakes and kids

Warning: The following content may be disturbing to some listeners.

For years, child sexual abuse material was mostly distributed by mail, and authorities used investigative techniques to stem its spread. That got a lot harder when the internet came along, and AI has supercharged the problem. “Those 750,000 predators that are online at any given time looking to connect with minor[s] … they just need to find a picture of a child and use the AI to generate child sexual abuse materials and superimpose these faces on something that is inappropriate,” says child safety advocate and TikToker Tiana Sharifi. The nonprofit Thorn has created new design principles aimed at fighting child sexual abuse. Rebecca Portnoff, the organization’s vice president of data science, says tech companies need to develop better technology to detect AI-generated images and commit not to use this material to train AI models.

About the Podcast

Monday through Friday, Marketplace demystifies the digital economy in less than 10 minutes. We look past the hype and ask tough questions about an industry that’s constantly changing.