Simplifying data pipelines with Apache Kafka
Offered By: IBM via Cognitive Class
Course Description
Overview
Lesson 1 - Introduction to Apache Kafka
- What Kafka is and why it was created
- The Kafka architecture
- The main components of Kafka
- Some of the use cases for Kafka
Lesson 2 - Kafka Command Line
- The contents of Kafka's /bin directory
- How to start and stop Kafka
- How to create new topics
- How to use Kafka command line tools to produce and consume messages
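For a rough idea of what these command line exercises look like, here is a minimal sketch using the scripts shipped in Kafka's bin/ directory. It assumes a local single-broker setup with the default config files; exact flags differ between Kafka releases (older versions use --zookeeper instead of --bootstrap-server, and KRaft-based releases do not need ZooKeeper at all), and the topic name test-topic is just a placeholder.

```sh
# Start ZooKeeper and a single Kafka broker (each runs in the foreground;
# use separate terminals or the -daemon flag)
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

# Create a topic
bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1

# Produce and consume messages from the command line
bin/kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092
bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server localhost:9092

# Stop the broker
bin/kafka-server-stop.sh
```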
Lesson 3 - Kafka Producer Java API
- The Kafka producer client
- Some of the KafkaProducer configuration settings and what they do
- How to create a Kafka producer using the Java API and send messages both synchronously and asynchronously
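As a rough companion to this lesson, the sketch below shows a minimal Java producer that sends one record synchronously and one asynchronously. The broker address localhost:9092, the topic test-topic, and the class name are placeholders, and the configuration settings the course itself recommends may differ.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // broker address (placeholder)
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("test-topic", "key-1", "hello kafka");

            // Synchronous send: block until the broker acknowledges the write
            RecordMetadata meta = producer.send(record).get();
            System.out.printf("Wrote to %s-%d at offset %d%n",
                              meta.topic(), meta.partition(), meta.offset());

            // Asynchronous send: register a callback instead of blocking
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.println("Async write at offset " + metadata.offset());
                }
            });
        } // closing the producer flushes any pending asynchronous sends
    }
}
```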
Lesson 4 - Kafka Consumer Java API
- The Kafka consumer client
- Some of the KafkaConsumer configuration settings and what they do
- How to create a Kafka consumer using the Java API
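A minimal Java consumer along these lines might look like the sketch below. The broker address, topic name, and group id are placeholders; it assumes a client version whose poll() accepts a Duration (older clients take a long timeout), and it relies on the default auto-commit behavior for offsets.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // broker address (placeholder)
        props.put("group.id", "demo-group");                 // consumer group id (placeholder)
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");          // read from the start if no committed offset

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            while (true) {
                // Poll the broker for new records; the timeout bounds how long poll() may block
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                                      record.partition(), record.offset(),
                                      record.key(), record.value());
                }
            }
        }
    }
}
```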
Lesson 5 - Kafka Connect and Spark Streaming
- Kafka Connect and how to use a pre-built connector
- Some of the components of Kafka Connect
- How to use Kafka and Spark Streaming together
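The course description does not say here whether it uses Spark's older DStream API or Structured Streaming, so the sketch below uses Structured Streaming's Kafka source as one illustration of reading a Kafka topic from Spark. The broker address, topic, and application name are placeholders, and the spark-sql-kafka connector package must be on the classpath.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaSparkExample {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("KafkaSparkExample")
            .master("local[*]")               // local mode, for the sketch only
            .getOrCreate();

        // Read a stream of records from the Kafka topic
        Dataset<Row> lines = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")  // placeholder broker
            .option("subscribe", "test-topic")                     // placeholder topic
            .load()
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Write the decoded records to the console as they arrive
        StreamingQuery query = lines.writeStream()
            .format("console")
            .start();

        query.awaitTermination();
    }
}
```

Kafka Connect itself is driven mostly by configuration rather than code: a pre-built connector is typically launched by passing its .properties file to bin/connect-standalone.sh, or by submitting the equivalent JSON to a distributed Connect worker's REST API.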
Syllabus
Recommended skills prior to taking this course:
- Have taken the Hadoop 101 course.
- Basic understanding of Apache Hadoop and Big Data.
- Basic Linux Operating System knowledge.
- Basic understanding of the Scala, Python, R, or Java programming languages.
Related Courses
- Google Cloud Big Data and Machine Learning Fundamentals en Español (Google Cloud via Coursera)
- Data Analysis with Python (IBM via Coursera)
- Intro to TensorFlow 日本語版 (Google Cloud via Coursera)
- TensorFlow on Google Cloud - Français (Google Cloud via Coursera)
- Freedom of Data with SAP Data Hub (SAP Learning)