#openbox Bias identification and mitigation with Patrick Hall - Part 2

OPENBOX aims to make open problems easier to understand, as a step toward finding solutions for them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and distill a simplified understanding of these problems. The conversations are published as a podcast series.

Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He conducts research in support of the NIST AI Risk Management Framework and is a contributor to NIST's work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He also runs the open-source initiative "Awesome Machine Learning Interpretability," which curates and maintains a list of practical responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, published by O'Reilly, and he helps manage the AI Incident Database.

This is part 2 of the episode. Patrick speaks about key approaches to bias mitigation and their limitations, and discusses the open problems in this area.

About the Podcast

ATGO AI is a podcast channel from ForHumanity. The podcast brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance, and oversight in developing and deploying emerging technology, including Artificial Intelligence.