Microsoft Reducing AI Compute Requirements with Small Language Models

“Microsoft is making a bet that we’re not going to need a single AI, we’re going to need many different AIs,” Sebastien Bubeck, Microsoft’s vice president of generative-AI research, tells Bloomberg senior technology analyst Anurag Rana. In this Tech Disruptors episode, the two examine the differences between a large language model such as OpenAI’s GPT-4o and a small language model such as Microsoft’s Phi-3 family. Bubeck and Rana walk through use cases for each class of model across industries and workflows, and compare the costs and compute/GPU requirements of SLMs and LLMs.

About the Podcast

Tech Disruptors by Bloomberg Intelligence features conversations with thought leaders and management teams on disruptive trends. Topics covered in this series include cloud, e-commerce, cybersecurity, AI, 5G, streaming, advertising, EVs, automation, crypto, fintech, AR/VR, the metaverse and Web 3.0. This podcast is intended for professional investors only. It is prepared solely for informational purposes and does not constitute an offer or investment advice.