Deeply Declarative Data Pipelines with Flink and Kubernetes
Offered By: Confluent via YouTube
Course Description
Overview
Explore the potential of deeply declarative data pipelines on Kubernetes in this 38-minute conference talk. Discover how to deploy stream processing jobs using only SQL and YAML, leveraging Flink and Kubernetes for a low-code approach that significantly reduces development time. Delve into the complexities of data pipelines beyond streaming SQL, addressing challenges such as integrating various systems, managing schemas, and handling extensive configuration. Investigate the extent to which streaming data pipelines on Kubernetes can be made declarative by incorporating additional operators into the stack. Learn about Confluent's innovative approach to data infrastructure, focusing on data in motion and enabling real-time, multi-source data streaming across organizations.
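To make the "only SQL and YAML" idea concrete, below is a minimal sketch of what such a declarative deployment can look like. It assumes the Apache Flink Kubernetes Operator's FlinkDeployment custom resource; the image tag, the sql-runner.jar entry point, and the SQL file path are illustrative placeholders rather than details taken from the talk.

# Hypothetical FlinkDeployment manifest: the Flink Kubernetes Operator
# reconciles this YAML into a running Flink job.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: orders-sql-pipeline
spec:
  image: flink:1.17            # illustrative Flink image tag
  flinkVersion: v1_17
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # A small runner jar that submits the statements in pipeline.sql;
    # both the jar and the SQL path are assumptions for this sketch.
    jarURI: local:///opt/flink/usrlib/sql-runner.jar
    args: ["/opt/flink/sql/pipeline.sql"]
    parallelism: 2
    upgradeMode: stateless

Under these assumptions, applying the manifest (kubectl apply -f) is the entire deployment step; the referenced SQL file carries the actual pipeline logic.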
Syllabus
Deeply Declarative Data Pipelines
Taught by
Confluent
Related Courses
Developing Stream Processing Applications with AWS Kinesis (Pluralsight)
Conceptualizing the Processing Model for the AWS Kinesis Data Analytics Service (Pluralsight)
Processing Streaming Data Using Apache Flink (Pluralsight)
Complex Event Processing Using Apache Flink (Pluralsight)