PySpark for Data Science - Intermediate
Offered By: Udemy
Course Description
Overview
What you'll learn:
- This module of PySpark tutorials explains intermediate concepts such as the use of SparkSession in later Spark versions and the use of SparkConf and SparkContext in earlier versions.
- It will also help you understand how the Spark environment is set up, the concepts of broadcast variables and accumulators, and optimization techniques such as parallelism, Tungsten, and the Catalyst optimizer.
This module of PySpark tutorials explains intermediate concepts such as the use of SparkSession in later Spark versions and the use of SparkConf and SparkContext in earlier versions. It will also help you understand how the Spark-related environment is set up, the concepts of broadcast variables and accumulators, and other optimization techniques such as parallelism, Tungsten, and the Catalyst optimizer (minimal sketches of several of these appear after the topic list below). You will also be taught the various compression techniques such as Snappy and Zlib. We will also cover big data ecosystem concepts such as HDFS and block storage, the various components of Spark such as Spark Core, MLlib, GraphX, SparkR, Spark Streaming, and Spark SQL, as well as the basics of the Python language that are relevant when working with Apache Spark, which is what makes it PySpark. We will learn the following in this course:
- Regression
- Linear Regression
- Output Column
- Test Data
- Prediction
- Generalized Linear Regression
- Forest Regression
- Classification
- Binomial Logistic Regression
- Multinomial Logistic Regression
- Decision Tree
- Random Forest
- Clustering
- K-Means Model
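To make the session-versus-context distinction above concrete, here is a minimal sketch that creates an entry point both ways; the application name and the local master URL are placeholder values, not part of the course material.

```python
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

# Spark 2.x and later: SparkSession is the single entry point.
spark = (
    SparkSession.builder
    .appName("intermediate-demo")   # placeholder application name
    .master("local[*]")             # run locally using all cores
    .getOrCreate()
)

# Spark 1.x style: configure explicitly with SparkConf and SparkContext.
# (Here it simply returns the context already created by the session above.)
conf = SparkConf().setAppName("intermediate-demo").setMaster("local[*]")
sc = SparkContext.getOrCreate(conf)
```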
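Broadcast variables and accumulators, also listed above, can be sketched in a few lines; the lookup table, the counter, and the sample country codes below are made-up examples.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("broadcast-accumulator-demo").getOrCreate()
sc = spark.sparkContext

# Broadcast: ship a small read-only lookup table to every executor once.
countries = sc.broadcast({"IN": "India", "US": "United States"})

# Accumulator: a counter that executors add to and the driver reads afterwards.
unknown = sc.accumulator(0)

def lookup(code):
    if code not in countries.value:
        unknown.add(1)
        return "Unknown"
    return countries.value[code]

codes = sc.parallelize(["IN", "US", "FR"])
print(codes.map(lookup).collect())   # ['India', 'United States', 'Unknown']
print(unknown.value)                 # 1
```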
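The compression codecs mentioned above are usually chosen when writing files out. A minimal sketch, assuming Parquet output for Snappy and ORC output for zlib, with placeholder paths:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compression-demo").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])

# Snappy is a common (and default) codec for Parquet output.
df.write.mode("overwrite").option("compression", "snappy").parquet("/tmp/demo_snappy")

# Zlib is available as a codec for ORC output.
df.write.mode("overwrite").option("compression", "zlib").orc("/tmp/demo_zlib")
```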
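For the regression topics in the list (output column, test data, prediction), a minimal linear regression sketch on a tiny made-up dataset might look like the following; the column names and values are illustrative only.

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("regression-demo").getOrCreate()

# Tiny made-up training set: predict y from x1 and x2.
train_raw = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 1.0, 6.0), (3.0, 4.0, 11.0), (4.0, 3.0, 12.0)],
    ["x1", "x2", "y"],
)

# VectorAssembler packs the input columns into a single output column of features.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(train_raw)

model = LinearRegression(featuresCol="features", labelCol="y").fit(train)

# Score unseen test data: only the feature columns are needed for prediction.
test = assembler.transform(spark.createDataFrame([(5.0, 5.0)], ["x1", "x2"]))
model.transform(test).select("features", "prediction").show()
```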
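Similarly, a minimal sketch of the K-Means model from the clustering topic, again on made-up points:

```python
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kmeans-demo").getOrCreate()

points = spark.createDataFrame(
    [(0.0, 0.0), (0.1, 0.1), (9.0, 9.0), (9.1, 8.9)],
    ["x", "y"],
)
features = VectorAssembler(inputCols=["x", "y"], outputCol="features").transform(points)

# Fit a 2-cluster K-Means model and attach a cluster id to each point.
model = KMeans(k=2, seed=1, featuresCol="features").fit(features)
model.transform(features).select("x", "y", "prediction").show()
```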
PySpark is a big data solution for real-time streaming using the Python programming language, and it provides a better and more efficient way to perform all kinds of calculations and computations. It is also arguably among the best solutions on the market because it is interoperable, i.e., PySpark can easily be managed alongside other technologies and other components of the entire pipeline. Earlier big data and Hadoop approaches relied on batch processing.
PySpark is an open-source project whose codebase is written in Python and is used mainly for data-intensive and machine learning operations. It has been widely adopted and has become popular in industry, so PySpark can be seen replacing other Spark-based components, such as those written in Java or Scala. One point worth noting is that PySpark works with DataFrames rather than typed Datasets, as the latter are not exposed by the Python API. Practitioners need tools that are more reliable and faster when streaming real-time data. Earlier tools such as MapReduce used the map and reduce concepts: mappers emit intermediate records, which are then shuffled or sorted and finally reduced into a single result. MapReduce provided a way of performing parallel computation. PySpark instead uses in-memory techniques rather than writing intermediate results out to disk, and it provides a general-purpose, faster computation engine; a minimal word-count sketch of this pattern appears below.
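As a rough sketch of that map/shuffle/reduce pattern expressed in PySpark, with the result cached in memory (the sample sentences are made up):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["spark makes big data simple", "big data needs spark"])

counts = (
    lines.flatMap(lambda line: line.split())   # "map": emit individual words
         .map(lambda word: (word, 1))
         .reduceByKey(lambda a, b: a + b)       # shuffle by key, then reduce
         .cache()                               # keep the result in memory for reuse
)
print(counts.collect())
```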
Taught by
Exam Turf
Related Courses
- Big Data (University of Adelaide via edX)
- Advanced Data Science with IBM (IBM via Coursera)
- Analysing Unstructured Data using MongoDB and PySpark (Coursera Project Network via Coursera)
- Apache Spark for Data Engineering and Machine Learning (IBM via edX)
- Apache Spark (TM) SQL for Data Analysts (Databricks via Coursera)