Understanding Deep Learning - Prof. SIMON PRINCE [STAFF FAVOURITE]

Watch behind the scenes, get early access, and join the private Discord by supporting us on Patreon: https://patreon.com/mlst
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

In this comprehensive exploration of deep learning with Professor Simon Prince, who has just authored an entire textbook on the subject, we investigate the technical underpinnings that contribute to the field's unexpected success and confront the enduring conundrums that still perplex AI researchers. Key points discussed include the surprising efficiency of deep learning models, whose high-dimensional loss functions are optimized in ways that defy traditional statistical expectations. Professor Prince provides an exposition on the choice of activation functions, architecture design considerations, and overparameterization. We scrutinize the generalization capabilities of neural networks, addressing the seeming paradox of well-performing overparameterized models. Professor Prince challenges popular misconceptions, shedding light on the manifold hypothesis and the role of data geometry in informing the training process. He also speaks about how layers within neural networks collaborate, recursively reconfiguring instance representations in ways that contribute to both the stability of learning and the emergence of hierarchical feature representations. Beyond the primary discussion of technical elements and learning dynamics, the conversation briefly turns to the ethical implications of AI advancements.

Follow Prof. Prince:
https://twitter.com/SimonPrinceAI
https://www.linkedin.com/in/simon-prince-615bb9165/

Get the book now!
https://mitpress.mit.edu/9780262048644/understanding-deep-learning/
https://udlbook.github.io/udlbook/

Panel: Dr. Tim Scarfe
https://www.linkedin.com/in/ecsquizor/
https://twitter.com/ecsquendor

TOC:
[00:00:00] Introduction
[00:11:03] General Book Discussion
[00:15:30] The Neural Metaphor
[00:17:56] Back to Book Discussion
[00:18:33] Emergence and the Mind
[00:29:10] Computation in Transformers
[00:31:12] Studio Interview with Prof. Simon Prince
[00:31:46] Why Deep Neural Networks Work: Spline Theory
[00:40:29] Overparameterization in Deep Learning
[00:43:42] Inductive Priors and the Manifold Hypothesis
[00:49:31] Universal Function Approximation and Deep Networks
[00:59:25] Training vs Inference: Model Bias
[01:03:43] Model Generalization Challenges
[01:11:47] Purple Segment: Unknown Topic
[01:12:45] Visualizations in Deep Learning
[01:18:03] Deep Learning Theories Overview
[01:24:29] Tricks in Neural Networks
[01:30:37] Critiques of ChatGPT
[01:42:45] Ethical Considerations in AI

References on YT version VD: https://youtu.be/sJXn4Cl4oww

About the Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience, and philosophy of mind with in-depth analysis. Our approach is unrivalled in scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, who holds a Doctor of Philosophy from MIT (https://www.linkedin.com/in/dr-keith-duggar/).