S3E10 - Avoid A/B Testing Statistics Mistakes with Max Li

A/B testing plays a critical role in decision-making, and so does avoiding A/B testing statistics mistakes. In data-driven companies, A/B tests not only prove the positive impact of product features (treatments) but also provide evidence to safely roll out neutral features that are not expected to move business metrics. When rolling out a neutral feature, we need to test whether the treatment is non-inferior to control, and conventional A/B testing is not suitable in this case. In my talk, I present when conventional A/B testing fails, what the non-inferiority test is, and why the non-inferiority test is better suited to showing that a treatment is not worse than control. We've published an article on this topic here: https://bit.ly/3vmFsYp

About the speaker: Max is a senior data scientist at Wish, where he focuses on experimentation (A/B testing) and machine learning. He has been improving the A/B testing platform at Wish on various fronts, including infrastructure, statistical testing, and usability. His passion is to empower data-driven decision-making through the rigorous use of data. Max earned his Ph.D. in Statistical Informatics from the University of Arizona.
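For a concrete picture of the idea, here is a minimal sketch (not from the talk) of a non-inferiority test for a conversion-rate metric in Python. The function name, the input numbers, and the non-inferiority margin are all hypothetical; the point is that instead of testing whether treatment and control differ, you run a one-sided test of H0: p_treatment - p_control <= -margin against H1: p_treatment - p_control > -margin.

```python
# A minimal sketch of a non-inferiority z-test for conversion rates.
# All inputs below are hypothetical illustrations, not real experiment data.
import math
from scipy.stats import norm

def non_inferiority_ztest(conv_t, n_t, conv_c, n_c, margin, alpha=0.05):
    """One-sided test of H0: p_t - p_c <= -margin  vs  H1: p_t - p_c > -margin.

    Rejecting H0 supports the claim that the treatment is non-inferior to
    control, i.e. not worse than control by more than `margin`.
    """
    p_t, p_c = conv_t / n_t, conv_c / n_c
    # Unpooled standard error of the difference in proportions.
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = (p_t - p_c + margin) / se
    p_value = 1 - norm.cdf(z)  # one-sided, upper-tail p-value
    return z, p_value, p_value < alpha

# Hypothetical example: treatment converts slightly worse than control,
# but within a pre-specified 1 percentage-point margin.
z, p, non_inferior = non_inferiority_ztest(
    conv_t=980, n_t=10_000, conv_c=1_000, n_c=10_000, margin=0.01
)
print(f"z = {z:.2f}, p = {p:.4f}, non-inferior at 5% level: {non_inferior}")
```

In this sketch a significant result means the data rule out the treatment being worse than control by more than the margin, which is the evidence you need to safely roll out a neutral feature.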

About the Podcast

Every week we share:
- Interviews with the best Conversion Rate Optimization (CRO) professionals and experimenters from around the world
- Conference sessions / tutorials

Recent recognition:
- Top 10% of most followed podcasts on Spotify