Apache Spark Fundamentals
Offered By: Pluralsight
Course Description
Overview
This course will teach you how to use Apache Spark to analyze your big data at lightning-fast speeds, leaving Hadoop in the dust. For a deep dive into SQL and Streaming, check out the sequel, Handling Fast Data with Apache Spark SQL and Streaming.
Our ever-connected world is creating data faster than Moore's law can keep up with, so we have to be smarter about how we analyze it. Previously, we had Hadoop's MapReduce framework for batch processing, but modern big data processing demands have outgrown it. That's where Apache Spark steps in, boasting speeds 10-100x faster than Hadoop and setting the world record in large-scale sorting. Spark's general abstraction means it can expand beyond simple batch processing, making it capable of things like blazing-fast iterative algorithms and exactly-once streaming semantics. In this course, you'll learn Spark from the ground up, starting with its history before creating a Wikipedia analysis application as one of the means for learning a wide scope of its core API. That core knowledge will make it easier to look into Spark's other libraries, such as the streaming and SQL APIs. Finally, you'll learn how to avoid a few commonly encountered rough edges of Spark. You will leave this course with a tool belt capable of creating your own performance-maximized Spark application.
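The Wikipedia analysis the course builds exercises core RDD transformations such as `flatMap`, `map`, and `reduceByKey`. As a rough, Spark-free sketch of the kind of word-count pipeline those operations express, the same steps can be emulated in plain Python (the sample lines and the Spark calls shown in comments are illustrative, not taken from the course):

```python
from collections import Counter
from itertools import chain

# Toy stand-in for lines of a Wikipedia text dump (made-up data).
lines = [
    "Apache Spark is fast",
    "Spark runs on Hadoop clusters",
]

# Spark equivalent: rdd.flatMap(lambda line: line.split())
words = list(chain.from_iterable(line.split() for line in lines))

# Spark equivalent: .map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
counts = Counter(words)

print(counts["Spark"])  # "Spark" appears once in each toy line
```

In actual Spark these transformations are lazy and run distributed across a cluster; the sketch above only shows the shape of the computation, not Spark's execution model.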
Syllabus
- Getting Started (39 mins)
- Spark Core: Part 1 (55 mins)
- Spark Core: Part 2 (28 mins)
- Distribution and Instrumentation (47 mins)
- Spark Libraries (63 mins)
- Optimizations and the Future (21 mins)
Taught by
Justin Pihony
Related Courses
- Amazon DynamoDB Service Primer (French), Amazon Web Services via AWS Skill Builder
- Amazon DynamoDB Service Primer (German), Amazon Web Services via AWS Skill Builder
- Amazon DynamoDB Service Primer (Italian), Amazon Web Services via AWS Skill Builder
- Amazon DynamoDB Service Primer (Korean), Amazon Web Services via AWS Skill Builder
- Amazon DynamoDB Service Primer (Simplified Chinese), Amazon Web Services via AWS Skill Builder