Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

On this episode, we’re joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.

We discuss:
- What “attention” means in the context of ML.
- Aidan’s role in the “Attention Is All You Need” paper.
- What state-space models (SSMs) are, and how they could be an alternative to transformers.
- What it means for an ML architecture to saturate compute.
- Data constraints that arise as LLMs scale.
- Challenges of measuring LLM performance.
- How Cohere is positioned within the LLM development space.
- Insights on scaling an LLM down into a more domain-specific model.
- Concerns around synthetic content and AI changing public discourse.
- The importance of raising money at healthy milestones for AI development.

Aidan Gomez - https://www.linkedin.com/in/aidangomez/
Cohere - https://www.linkedin.com/company/cohere-ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Resources:
- https://cohere.ai/
- “Attention Is All You Need”

#OCR #DeepLearning #AI #Modeling #ML

About the Podcast

Gradient Dissent is a machine learning podcast from Weights & Biases with hosts Lukas Biewald, Lavanya Shukla, and Caryn Marooney. It takes you behind the scenes to learn how industry leaders are putting deep learning models into production at NVIDIA, Meta, Google, Lyft, OpenAI, and more.