Deploying Scalable Machine Learning Models for Long-Term Sustainability

As machine learning models proliferate and become more sophisticated, deploying them to the cloud grows increasingly expensive. Optimization is also a matter of scale: it requires the flexibility to move models between different hardware, such as graphics processing units (GPUs) and central processing units (CPUs), to get the best performance for the cost. The ability to accelerate the deployment of machine learning models to the cloud or the edge at scale is changing how organizations build next-generation AI models and applications, and being able to optimize those models quickly, to save costs and sustain them over time, is moving to the forefront for many developers.

In this episode of The New Stack Makers podcast, recorded at AWS re:Invent, Luis Ceze, co-founder and CEO of OctoML, talks about how to optimize and deploy machine learning models on any hardware, in the cloud or on edge devices. Alex Williams, founder and publisher of The New Stack, hosted the conversation.
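OctoML builds on the open source Apache TVM compiler stack, which grew out of Ceze's research group at the University of Washington. As a rough illustration of the kind of workflow the episode covers, here is a minimal sketch (not OctoML's platform API) that uses TVM to compile a single ONNX model for either a CPU or a GPU target; the model file, input name, and input shape are assumptions made for the example:

```python
# Minimal sketch: retargeting one trained model across hardware with
# Apache TVM, the open source compiler OctoML builds on. The model file
# "model.onnx" and its input name/shape are hypothetical.
import onnx
import tvm
from tvm import relay

# Import the trained model once into TVM's Relay intermediate representation.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# The same model compiles for a CPU ("llvm") or a GPU ("cuda") target
# just by changing the target string; each build produces its own
# deployable artifact. (The "cuda" build requires a CUDA toolchain.)
for target in ("llvm", "cuda"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    lib.export_library(f"model-{target}.so")
```

Being able to retarget the same model this cheaply is what makes it practical to chase price/performance across clouds and edge devices rather than hand-tuning for one chip.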

About the Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack