

In-Depth Review: Introduction to Big Data with Apache Spark from UC Berkeley on edX

Detailed review by Class Central user Prakhar Srivastav of this short introductory course, which explores the Spark API.

Review by Prakhar Srivastav. Originally published here.

Last night I finished the final assignment for the new course that I had been working on for the past week, called Intro to Big Data with Apache Spark, or CS100.1x. With the course over, I decided to write down a quick review in the hope that it will help others get an idea of what they can expect by enrolling in this popular MOOC by UC Berkeley.

I heard about the course after stumbling upon a laudatory review by an existing student on Hacker News. After learning that the course involves programming assignments that teach you the Spark API, I couldn’t help signing up.

TL;DR – If you’re a programmer who wants to explore what the hype on Spark is all about and considers solving code katas a great way to spend your evenings, then take this course – you will not be disappointed.



APACHE SPARK

So what is Apache Spark? Spark is a fast and general-purpose cluster computing system that is used for large-scale data processing. The project was developed at the AMPLab at UC Berkeley, and its creators have since founded a standalone company – Databricks – that, among other things, provides enterprise consulting and training for Apache Spark.
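To give a flavour of the programming model, here is a minimal PySpark sketch of my own (not taken from the course) that runs a classic word count on a tiny local "cluster":

from pyspark import SparkContext

sc = SparkContext("local", "spark-sketch")

# Distribute a tiny dataset; transformations are lazy, collect() triggers the job
lines = sc.parallelize(["big data with apache spark", "spark is fast"])
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.collect())

sc.stop()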

Why should you care? Well, if you’re working as a data scientist or a data engineer you’re no stranger to the monumental growth of Spark in the big data field. Spark has been taking the big-data world by storm after showing consistent benchmarks that boast of it being up to a hundred times faster than Hadoop. Late last year, Spark also made waves by setting the world record for large-scale on-disk sorting.


In order to evangelize Spark and help developers and data scientists learn about it, Databricks and UC Berkeley teamed up to start a Big Data XSeries on edX that consists of two MOOCs – CS100.1x followed by Scalable Machine Learning.

The Big Data XSeries, created in partnership with Databricks, will articulate the expected output of Data Scientists and then teach students how to use PySpark (part of Apache Spark) to deliver against these expectations. Students will gain hands-on experience applying big data principles using Apache Spark, a cluster computing system well-suited for large-scale machine learning tasks.

PEDAGOGY

edx Big Data with Apache Spark

With the elevator pitch out of the way, let’s get back to the review. The entire course spans five weeks (including week zero, which dealt only with the setup), with the material consisting of lecture videos, short quizzes and programming assignments. Unlike the typical technical MOOCs I’ve done in the past, there are no exams and, surprisingly, the last week has no lecture videos to watch.


The lecture videos, in my opinion, were pretty light on content. Even on a lethargic evening, it took me roughly thirty minutes to breeze through the videos and answer the quizzes. The first week started off by talking about Apache Spark and how it is different from Hadoop. The two subsequent weeks focused on issues with data management and what a typical big data processing pipeline looks like. The penultimate video in week four spent roughly five minutes on Collaborative Filtering and Spark MLlib, which somehow seemed enough to the teaching staff to justify a programming assignment on the topic.
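For the curious, the MLlib collaborative filtering API that the week-four assignment builds on looks roughly like this – a minimal sketch with made-up ratings and arbitrary parameters, not the course’s actual lab code:

from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext("local", "cf-sketch")

# Made-up (user, movie, rating) triples
ratings = sc.parallelize([
    Rating(0, 0, 4.0), Rating(0, 1, 2.0),
    Rating(1, 0, 3.0), Rating(1, 2, 5.0),
    Rating(2, 1, 1.0), Rating(2, 2, 4.0),
])

# Train a small ALS model; rank and iterations are chosen arbitrarily here
model = ALS.train(ratings, rank=2, iterations=5)

# Predict how user 0 might rate movie 2
print(model.predict(0, 2))

sc.stop()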


As far as forums and class interaction go, the instructors are quite active on the forums and the staff is quick to sort out any hiccups with the autograder. I got stuck a couple of times but was quickly able to find a way forward by searching previous threads in the forums.

PROGRAMMING ASSIGNMENTS

Coming back to my initial motivation for taking this course: if you are wondering whether it delivered on its promise of teaching Spark, the answer is a resounding yes! The programming assignments were exceptionally well thought-out and interesting. Each week’s lab exercise was presented as an IPython notebook, which provided the ideal environment for interacting with and exploring Spark’s Python API. (As an aside, if you haven’t checked out IPython notebooks yet, do yourself a favor and give them a go.)


What the course lacked in the lectures was more than compensated for in the extremely detailed and well presented notebooks. The instructors leveraged the entire set of IPython capabilities wonderfully as the notebooks were interspersed with diagrams, LaTeX equations and even plots courtesy of Matplotlib.

The typical structure followed in the programming assignments was:

⚫ Introduce a topic with theory and examples
⚫ Give a small coding exercise on solving the problem
⚫ Run your code against the test cases provided

Rather than having a big problem statement for the students to solve in one go, the bite-sized problems and test cases provided instant feedback and the necessary encouragement to keep forging ahead. Lastly, much to the project’s credit, the Spark documentation is well written, which made going from finding a function that does X to actually using it quite gratifying.
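To give an idea of the rhythm, here is a hypothetical exercise in the same spirit – my own sketch, not taken from a lab; it assumes the SparkContext `sc` that the course notebooks already provide:

# Exercise: fill in one small function, then run it against the provided
# test case for instant feedback.
def average_by_key(pairs):
    """Return an RDD of (key, average of values) pairs."""
    sums_and_counts = (pairs.mapValues(lambda v: (v, 1))
                            .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])))
    return sums_and_counts.mapValues(lambda s: s[0] / float(s[1]))

# "Test cases provided" style check
test_rdd = sc.parallelize([("a", 2.0), ("a", 4.0), ("b", 6.0)])
assert dict(average_by_key(test_rdd).collect()) == {"a": 3.0, "b": 6.0}
print("Test passed!")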

Even though Spark supports a variety of languages and interfaces – R, Spark SQL, Java, Python and Scala – the instructors’ choice to go with Python and IPython clearly shows how powerful a teaching tool the combination can be.

SETUP


Before jumping into the first assignment, I was reasonably certain that setting up a cluster computing system was going to be a big pain – even on a single machine. The courseware’s long setup document and set of five videos also corroborated my fears. What turned out to be a delight, though, was that it took me less than ten minutes to get the entire setup running.

Showing great wisdom on their part, the course designers made the process extremely straightforward by providing a custom Vagrant image with everything pre-installed. Since I use Vagrant on a daily basis, going from never having run Spark to a functioning Spark environment was just a matter of a few commands.

$ wget <course_materials_url>
$ cd bigdata_course
$ vagrant up
# 5 mins later ... IPython is running on port 8001

The VM also came bundled with all the required IPython notebooks and the data associated with the labs, which meant you only needed to set everything up once.
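Once the notebook is open in the browser, a line or two is enough to confirm that Spark is working – my own sanity check, not part of the course materials; `sc` is the SparkContext the notebook provides:

print(sc.version)
print(sc.parallelize(range(1000)).filter(lambda x: x % 2 == 0).count())  # prints 500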

CONCLUSION

CS100.1x is probably the course where I’ve spent the least amount of time and had the most fun (Steve Huffman’s web dev course comes a close second). I started a week late, and within three weekends I was able to finish the material. While that could be attributed to my prior experience with machine learning and functional programming, from what I could see in the forums, students were able to finish the assignments within a reasonable amount of time.

What this goes to show is that the material is light, focused and can be finished quickly if you find it interesting – much like any other crash course. For someone who’s curious to understand how Spark works and wants to dip their toes in the water, I doubt there can be anything more fruitful than spending a couple of weekends working through the programming assignments.

Overall, I had a great time learning about the Spark computation model, its API and how it can be efficiently used to process large amounts of data. Although the path from completing this course to setting up a data processing pipeline on a cluster for your company is a tough one, I feel that the material provided me enough tools to go and explore further.

Editor’s Note: Review by Prakhar Srivastav. Prakhar loves building things (mostly for the web) and celebrating the joy of programming. In his free time he enjoys gaming, taking courses and trying out new programming languages. Prakhar’s transcript



