The Making of an Exabyte-scale Data Lakehouse Using Apache Ozone
Offered By: The ASF via YouTube
Course Description
Overview
Explore the creation of an exabyte-scale Data Lakehouse in this 41-minute conference talk from The ASF. Discover how the Apache Ozone object store scales to exabytes of data while enabling high-performance queries and reducing both cost and carbon footprint. Learn about the collaboration between the Ozone community and key stakeholders, including the Hive, Impala, Spark, NiFi, and Iceberg communities, to ensure tight integration. Delve into recent integration work aimed at providing a cohesive Data Lakehouse experience on the Ozone platform. Gain insights from speakers Saketa Chalamchala, a Sr. Software Engineer at Cloudera, and Siddharth Wagle as they cover topics including an Ozone overview, Hadoop architecture, Ozone building blocks, metadata duality, Lakehouse sizing, and data migration. Conclude with a demonstration showcasing practical applications of this technology.
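The "metadata duality" topic refers to Ozone exposing the same data through both a Hadoop-compatible file system interface and an S3-compatible object API. As a rough illustration only (the Ozone Manager hostname `om-host` is a placeholder, not from the talk), a Hadoop client can be pointed at Ozone's rooted file system with configuration along these lines:

```xml
<!-- core-site.xml: route Hadoop FileSystem calls to Ozone (ofs://) -->
<configuration>
  <property>
    <name>fs.ofs.impl</name>
    <value>org.apache.hadoop.fs.ozone.RootedOzoneFileSystem</value>
  </property>
  <property>
    <!-- om-host is a placeholder for the Ozone Manager address -->
    <name>fs.defaultFS</name>
    <value>ofs://om-host/</value>
  </property>
</configuration>
```

With a setup like this, engines such as Hive, Impala, or Spark can address data as `ofs://om-host/volume/bucket/key`, while the same bucket remains reachable as an object through Ozone's S3 gateway.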
Syllabus
Introduction
Agenda
Requirements
Ozone
Ozone overview
Ozone differentiators
Hadoop Architecture
High-level component architecture
Ozone building blocks
What does Ozone do?
Metadata duality
Lakehouse sizing
Data migration
Demo
Taught by
The ASF
Related Courses
Building Modern Data Streaming Apps with Open Source (Linux Foundation via YouTube)
How to Stabilize a GenAI-First Modern Data LakeHouse - Provisioning 20,000 Ephemeral Data Lakes per Year (CNCF [Cloud Native Computing Foundation] via YouTube)
Data Storage and Queries (DeepLearning.AI via Coursera)
Delivering Portability to Open Data Lakes with Delta Lake UniForm (Databricks via YouTube)
Fast Copy-On-Write in Apache Parquet for Data Lakehouse Upserts (Databricks via YouTube)