Professor Shannon Vallor on the AI Mirror

What if we saw Artificial Intelligence as a mirror rather than as a form of intelligence? That's the subject of a fabulous new book by Professor Shannon Vallor, who is my guest on this episode.

In our discussion, we explore how artificial intelligence reflects not only our technological prowess but also our ethical choices, biases, and the collective values that shape our world. We also discuss how AI systems mirror our societal flaws, raising critical questions about accountability, transparency, and the role of ethics in AI development. Shannon helps me to examine the risks and opportunities presented by AI, particularly in the context of decision-making, privacy, and the potential for AI to influence societal norms and behaviours.

This episode offers a thought-provoking exploration of the intersection between technology and ethics, urging us to consider how we can steer AI development in a direction that aligns with our shared values.

Guest Biography

Prof. Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh, where she is also appointed in Philosophy. She is Director of the Centre for Technomoral Futures in EFI and co-Director of the BRAID (Bridging Responsible AI Divides) programme, funded by the Arts and Humanities Research Council.

Professor Vallor's research explores how new technologies, especially AI, robotics, and data science, reshape human moral character, habits, and practices. Her work includes advising policymakers and industry on the ethical design and use of AI. She is a standing member of the One Hundred Year Study of Artificial Intelligence (AI100) and a member of the Oversight Board of the Ada Lovelace Institute. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network and the 2022 Covey Award from the International Association of Computing and Philosophy. She is a former Visiting Researcher and AI Ethicist at Google.

In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and The AI Mirror: Reclaiming Our Humanity in an Age of Machine Thinking (Oxford University Press, 2024).

Links

Shannon's website: https://www.shannonvallor.net/
The AI Mirror: https://global.oup.com/academic/product/the-ai-mirror-9780197759066?
A Noema essay by Shannon on the dangers of AI: https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/
A New Yorker feature on the book: https://www.newyorker.com/culture/open-questions/in-the-age-of-ai-what-makes-people-unique
The AI Mirror as one of the FT's technology books of the summer: https://www.ft.com/content/77914d8e-9959-4f97-98b0-aba5dffd581c
The FT review of The AI Mirror:

About the Podcast

People are often described as the largest asset in most organisations. They are also the biggest single cause of risk. This podcast explores the topic of 'human risk', or "the risk of people doing things they shouldn't or not doing things they should", and examines how behavioural science can help us mitigate it. It also looks at 'human reward', or "how to get the most out of people". When we manage human risk, we often stifle human reward. Equally, when we unleash human reward, we often inadvertently increase human risk.