DataCamp

Introduction to PySpark

via DataCamp

Overview

Learn to implement distributed data management and machine learning in Spark using the PySpark package.

In this course, you'll learn how to use Spark from Python! Spark is a tool for doing parallel computation with large datasets, and it integrates well with Python. PySpark is the Python package that makes the magic happen. You'll use this package to work with data about flights from Portland and Seattle. You'll learn to wrangle this data and build a whole machine learning pipeline to predict whether or not flights will be delayed. Get ready to put some Spark in your Python code and dive into the world of high-performance machine learning!
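
To make that concrete, here is a minimal sketch (not taken from the course) of starting a local Spark session and loading a hypothetical flights.csv file into a DataFrame; the file name and read options are assumptions about the data layout.

```python
# Minimal sketch: start a local SparkSession and load flight data.
# "flights.csv" and its header/schema options are assumptions.
from pyspark.sql import SparkSession

# The SparkSession is the entry point to DataFrame and SQL functionality.
spark = SparkSession.builder.appName("flights-intro").getOrCreate()

# Read the CSV into a distributed DataFrame.
flights = spark.read.csv("flights.csv", header=True, inferSchema=True)

flights.printSchema()   # inspect the inferred column types
flights.show(5)         # preview the first few rows
```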

Syllabus

  • Getting to know PySpark
    • In this chapter, you'll learn how Spark manages data and how you can read and write tables from Python.
  • Manipulating data
    • In this chapter, you'll learn about the pyspark.sql module, which lets you run optimized queries against the data in your Spark session; a brief sketch of this style of querying follows the syllabus.
  • Getting started with machine learning pipelines
    • PySpark has built-in, cutting-edge machine learning routines, along with utilities to create full machine learning pipelines. You'll learn about them in this chapter; a sketch of such a pipeline appears after this syllabus.
  • Model tuning and selection
    • In this last chapter, you'll apply what you've learned to create a model that predicts which flights will be delayed.
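
The first two chapters center on getting data in and out of Spark and querying it. As a rough illustration (column names such as origin, dest, and air_time are assumptions, not the course's exact schema), you can register a DataFrame as a temporary table and query it either with SQL or with DataFrame methods:

```python
# Hedged sketch of chapter 1-2 style wrangling with the pyspark.sql module.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("flights-wrangle").getOrCreate()
flights = spark.read.csv("flights.csv", header=True, inferSchema=True)

# Register the DataFrame in the session catalog so it is visible to SQL.
flights.createOrReplaceTempView("flights")

# SQL version: flights longer than two hours.
long_flights = spark.sql(
    "SELECT origin, dest, air_time FROM flights WHERE air_time > 120"
)

# DataFrame-method version with a derived duration column.
long_flights2 = (
    flights.filter(flights.air_time > 120)
           .select("origin", "dest", "air_time")
           .withColumn("duration_hrs", F.col("air_time") / 60)
)

long_flights.show(5)
long_flights2.show(5)
```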
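
For the last two chapters, a typical Spark ML workflow chains feature transformers and a model into a Pipeline and tunes it with cross-validation. The sketch below assumes Spark 3.x and hypothetical columns (carrier, month, air_time, and a binary is_late label); it is illustrative, not the course's exact solution.

```python
# Hedged sketch of a machine learning pipeline with cross-validated tuning.
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Encode the categorical carrier column and assemble numeric features.
indexer = StringIndexer(inputCol="carrier", outputCol="carrier_index")
encoder = OneHotEncoder(inputCols=["carrier_index"], outputCols=["carrier_vec"])
assembler = VectorAssembler(
    inputCols=["month", "air_time", "carrier_vec"], outputCol="features"
)

# Logistic regression predicting the assumed binary "is_late" label.
lr = LogisticRegression(featuresCol="features", labelCol="is_late")

pipeline = Pipeline(stages=[indexer, encoder, assembler, lr])

# Grid of regularization strengths, scored by area under the ROC curve.
grid = ParamGridBuilder().addGrid(lr.regParam, [0.0, 0.01, 0.1]).build()
evaluator = BinaryClassificationEvaluator(labelCol="is_late")

cv = CrossValidator(
    estimator=pipeline, estimatorParamMaps=grid,
    evaluator=evaluator, numFolds=3,
)

# `train` would be a DataFrame of flight records split off for training:
# best_model = cv.fit(train).bestModel
```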

Taught by

Nick Solomon and Lore Dirick
