#69 Getting Data Sharing Right at Netflix Scale - Interview w/ Justin Cunningham

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please rate and review us on your podcast app of choice! If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here. Episode list and links to all available episode transcripts are here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Justin's LinkedIn: https://www.linkedin.com/in/justincinmd/

In this episode, Scott interviewed Justin Cunningham, who has worked as a tech lead and data architect on data platforms at Netflix, Yelp, and Atlassian over the last 8.5 years. In that time, Justin was involved in initiatives to push data ownership to developers / domains.

To sum up a point Justin touched on repeatedly: he recommends creating a pool of low-effort data - which will inherently be low quality - and using it for initial research into what might be useful. Focus on maximizing accessibility; you can still have governance, using things like join restrictions or giving consumers the ability to self-certify that they are using the data responsibly. Once you find the use cases, then you invest in data mesh-quality data products. At Yelp, Justin saw that focusing on data availability - getting data to a place where it could be found and played with - was a bigger driver of success than focusing initially on data quality. Once people discovered what data was available and how they might use it, the organization was able to work towards getting that data to an acceptable quality level.

Another point Justin made: figure out which you want to optimize for in general - getting things right upfront, or testing and changing. He believes in optimizing for change. Create an adaptive process and optimize for learning. Keep it simple and focus on value delivery - it will set up more tractable bets.

At Yelp, they were trying to ETL a huge amount of data into their data warehouse to build reports for the C-suite, but they were never going to get enough data ingested to meet their goals. It was taking two weeks to create each new set of ETLs - and that was just creation, not maintenance - so it was looking like they'd need 5x the number of people. What Justin found most useful at Yelp was focusing on getting as much "usable" data as possible in an automated way. They achieved this initially through the data mesh anti-pattern of copying directly from the underlying operational data stores and building business logic on top of it. But getting that data into the hands of the data team meant there could be an initial value assessment - once they proved...

About the Podcast

Interviews with data mesh practitioners, deep dives/how-tos, anti-patterns, panels, chats (not debates) with skeptics, "mesh musings", and so much more. Host Scott Hirleman (founder of the Data Mesh Learning Community) shares his learnings - and those of the broader data community - from over a year of deep diving into data mesh. Each episode contains a BLUF - bottom line, up front - so you can quickly absorb a few key takeaways and also decide if an episode will be useful to you - nothing worse than listening for 20+ minutes before figuring out if a podcast episode is going to be interesting and/or incremental ;) Hoping to provide quality transcripts in the future - if you want to help, please reach out!

Data Mesh Radio is also looking for guests to share their experience with data mesh! Even if that experience is "I am confused, let's chat about" some specific topic - yes, that could be you! You can check out our guest and feedback FAQ, including how to submit your name to be a guest and how to submit feedback - including anonymously if you want - here: https://docs.google.com/document/d/1dDdb1mEhmcYqx3xYAvPuM1FZMuGiCszyY9x8X250KuQ/edit?usp=sharing

Data Mesh Radio is committed to diversity and inclusion, including in our guests and guest hosts. If you are part of a minoritized group, please see this as an open invitation to be a guest, so please hit the link above.

If you are looking for additional useful information on data mesh, we recommend the community resources from Data Mesh Learning. All are vendor independent: https://datameshlearning.com/community/ You should also follow Zhamak Dehghani (founder of the data mesh concept); she posts a lot of great things on LinkedIn and has a wonderful data mesh book through O'Reilly. Plus, she's just a nice person: https://www.linkedin.com/in/zhamak-dehghani/detail/recent-activity/shares/

Data Mesh Radio is provided as a free community resource by DataStax. If you need a database that is easy to scale - read: serverless - but also easy to develop for - many APIs including gRPC, REST, JSON, GraphQL, etc., all of which are OSS under the Stargate project - check out DataStax's AstraDB service :) Built on Apache Cassandra, AstraDB is very performant and, oh yeah, is also multi-region/multi-cloud so you can focus on scaling your company, not your database. There's a free-forever tier for poking around/home projects, and you can also use code DAAP500 for a $500 free credit (apply under payment options): https://www.datastax.com/products/datastax-astra?utm_source=DataMeshRadio