Confluent, Kafka, Druid, and Flink: The Future of Streaming Data with Kai Waehner

Apache Kafka® is a streaming platform that reliably handles large-scale, real-time data streams. It’s used for real-time data pipelines, event sourcing, log aggregation, stream processing, and building analytics applications. Apache® Druid is a database designed to deliver fast, interactive, and scalable analytics on time-series and event-based data, helping organizations derive insights, monitor real-time metrics, and power analytics applications. Naturally, the two go together and are often both key parts of a company’s data architecture. Confluent is one of those companies. On this episode, Kai Waehner, Field CTO at Confluent, walks us through how Confluent uses Kafka and Druid together, explains where Apache Flink fits into the mix, and shares insights and trends from the world of data streaming.
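For a concrete picture of the pairing described above, here is a minimal, hypothetical sketch (not from the episode): a Java producer publishes JSON events to a Kafka topic, which Druid's Kafka indexing service can then ingest for real-time analytics. The broker address, topic name, and event fields are illustrative assumptions.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class MetricsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed local broker; replace with your cluster's bootstrap servers.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A timestamped JSON event; Druid's Kafka ingestion would consume the
            // "metrics" topic (hypothetical name) and index events like this one.
            String event = "{\"timestamp\":\"2024-01-01T00:00:00Z\",\"metric\":\"page_views\",\"value\":42}";
            producer.send(new ProducerRecord<>("metrics", event));
        }
    }
}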

About the Podcast

Tales at Scale cracks open the world of analytics projects. We’ll be diving into Apache Druid but also hearing from folks across the data ecosystem tackling everything from architecture to open source, from scaling to streaming, and everything in between. Brought to you by Imply.