Creating Responsible AI in the Face of Our Ignorance

We want to create AI that makes accurate predictions. We want that not only because we want our products to work, but also because reliable products are, all else being equal, ethically safer products. But we can’t always know whether our AI is accurate. Our ignorance leaves us with a question: which of the various AI models we’ve developed is the right one for this particular use case? In some circumstances, we might decide that using AI isn’t the right call, because we just don’t know enough. In other instances, we may know enough, but we still have to choose our model in light of the ethical values we’re trying to achieve. Julia and I talk about these and many other (ethical) problems that beset AI practitioners on the ground, and what can and cannot be done about them.

Dr. Julia Stoyanovich is Associate Professor of Computer Science & Engineering and of Data Science, and Director of the Center for Responsible AI at NYU. Her goal is to make “responsible AI” synonymous with “AI”. Julia has co-authored over 100 academic publications and has written for the New York Times, the Wall Street Journal, and Le Monde. She engages in technology policy, teaches responsible AI to students, practitioners, and the public, and has co-authored comic books on the topic. She received her Ph.D. in Computer Science from Columbia University.

About the Podcast

I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts: from AI to social media to quantum computers and blockchain, from hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.