Structured Streaming in Apache Spark 2
Offered By: Pluralsight
Course Description
Overview
Apache Spark 2 is the analytics engine you need to support your streaming applications. This course teaches you how to use its structured streaming API to build them.
Stream processing applications work with continuously updated data and react to changes in real time. Data frames in Spark 2.x support infinite data, effectively unifying batch and streaming applications. In this course, Structured Streaming in Apache Spark 2, you'll focus on using the tabular data frame API to work with streaming, unbounded datasets using the same APIs that work with bounded batch data. First, you'll learn how structured streaming works and what makes it different from, and more powerful than, traditional streaming applications: the basic streaming architecture and the improvements in structured streaming that let it react to data in real time. Then, you'll create triggers to control when streaming results are evaluated and output modes to write results out to a file or to the screen. Next, you'll discover how to build streaming pipelines with Spark by studying event-time aggregations, grouping and windowing functions, and join operations between batch and streaming data. You'll even work with real Twitter streams and analyze trending hashtags. Finally, you'll see how Spark stream processing integrates with Kafka's distributed publish-subscribe system by ingesting Twitter data from a Kafka producer and processing it with Structured Streaming. By the end of this course, you'll be comfortable analyzing streaming data using Spark's distributed analytics engine and its high-level structured streaming API.
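The pieces described above (an unbounded data frame, an event-time window, an output mode, and a trigger) fit together in just a few lines of code. The following is a minimal sketch rather than the course's own code: it assumes a local socket source on port 9999 emitting one hashtag per line, and all names are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hashtag-counts").getOrCreate()

# An unbounded DataFrame: the same tabular API used for batch data.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Stamp each row with an arrival timestamp (a stand-in for event time here)
# and count hashtags per 1-minute window, tolerating 2 minutes of lateness.
counts = (lines
          .select(F.current_timestamp().alias("event_time"),
                  F.col("value").alias("hashtag"))
          .withWatermark("event_time", "2 minutes")
          .groupBy(F.window("event_time", "1 minute"), "hashtag")
          .count())

# Output mode and trigger: emit updated counts to the console every 10 seconds.
query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .trigger(processingTime="10 seconds")
         .start())

query.awaitTermination()
```

Swapping the console sink for a file sink (with a path and checkpoint location, and an output mode the file sink supports) is the "file or screen" choice the description mentions.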
Topics:
- Course Overview
- Understanding the High Level Streaming API in Spark 2.x
- Building Advanced Streaming Pipelines Using Structured Streaming
- Integrating Apache Kafka with Structured Streaming
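For the Kafka integration described above, the core idea is that Kafka is just another streaming source for the same data frame API. The sketch below is illustrative, not the course's own code: it assumes a broker at localhost:9092 and a topic named "tweets" fed by a separate Twitter producer.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-tweets").getOrCreate()

# Kafka records arrive as binary key/value columns plus metadata
# (topic, partition, offset, timestamp); cast the value to a string.
tweets = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "tweets")
          .option("startingOffsets", "latest")
          .load()
          .selectExpr("CAST(value AS STRING) AS text"))

# Split each tweet into words, keep only hashtags, and count them.
hashtags = (tweets
            .select(F.explode(F.split("text", r"\s+")).alias("word"))
            .where(F.col("word").startswith("#"))
            .groupBy("word")
            .count())

# Complete mode re-emits the full set of counts on every trigger.
query = (hashtags.writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()
```

The Kafka source is not bundled with Spark itself; launch with spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:<your Spark 2.x version> (or the equivalent for your build) so it is on the classpath.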
Taught by
Janani Ravi
Related Courses
- Big Data (University of Adelaide via edX)
- Advanced Data Science with IBM (IBM via Coursera)
- Analysing Unstructured Data using MongoDB and PySpark (Coursera Project Network via Coursera)
- Apache Spark for Data Engineering and Machine Learning (IBM via edX)
- Apache Spark (TM) SQL for Data Analysts (Databricks via Coursera)