The end of the "best open LLM"

Modeling the compute versus performance tradeoff of many open LLMs. This is AI-generated audio with Python and 11Labs.

Source code: https://github.com/natolambert/interconnects-tools
Original post: https://www.interconnects.ai/p/compute-efficient-open-llms

0:00 The end of the "best open LLM"
3:05 Compute efficient open LLMs

Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_004.jpeg
Fig 2: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_009.png
Fig 3: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_014.png
Fig 4: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_016.png
Fig 5: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_018.png
Fig 6: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_020.png
Fig 7: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_022.png
Fig 8: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_024.png
Fig 9: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/scaling/img_028.png

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.interconnects.ai/subscribe

About the Podcast

Audio essays about the latest developments in AI, and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai