#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS

Support us! https://www.patreon.com/mlst

Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing this to be a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she has recently released a paper in collaboration with the Google Brain team titled 'Teaching Algorithmic Reasoning via In-context Learning'. In this work, Hattie identifies and examines four key stages for successfully teaching algorithmic reasoning to large language models (LLMs): formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Through algorithmic prompting, Hattie has achieved remarkable results, reducing error by an order of magnitude on some tasks compared to the best available baselines. This demonstrates that algorithmic prompting is a viable approach for teaching algorithmic reasoning to LLMs, and it may have implications for other tasks requiring similar reasoning capabilities.

TOC:
[00:00:00] Hattie Zhou
[00:19:49] Markus Rabe [Google Brain]

Hattie's Twitter - https://twitter.com/oh_that_hat
Website - http://hattiezhou.com/

Teaching Algorithmic Reasoning via In-context Learning [Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi]
https://arxiv.org/pdf/2211.09066.pdf

Markus Rabe [Google Brain]:
https://twitter.com/markusnrabe
https://research.google/people/106335/
https://www.linkedin.com/in/markusnrabe

Autoformalization with Large Language Models [Albert Jiang, Charles Edgar Staats, Christian Szegedy, Markus Rabe, Mateja Jamnik, Wenda Li, and Yuhuai Tony Wu]
https://research.google/pubs/pub51691/

Discord: https://discord.gg/aNPkGUQtc5
YT: https://youtu.be/80i6D2TJdQ4

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience, and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a Doctor of Philosophy from MIT (https://www.linkedin.com/in/dr-keith-duggar/).