
Stanford University

Stanford Seminar - HPC Opportunities in Deep Learning - Greg Diamos, Baidu

Stanford University via YouTube

Overview

This course explores opportunities for High-Performance Computing (HPC) in deep learning. By the end of the course, learners will understand the workload characteristics of deep learning models, the importance of work efficiency and speed-of-light limits in HPC, and techniques such as Elastic SGD, optimized kernels, and memory-efficient backpropagation. The course also covers dense compute, fast interconnects, model parallelism, and low-precision training. The material is delivered as a lecture on how deep learning scales, specialized systems, and the pitfalls of ignoring work efficiency and speed-of-light considerations. This course is intended for individuals interested in leveraging HPC for deep learning applications.
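As a rough illustration of one of the listed topics, the sketch below (plain NumPy, not code from the seminar) shows a common issue in low-precision training: tiny gradients underflow to zero in float16 unless they are scaled up before being stored in half precision and the scale is divided back out in float32 before the weight update. The values and scaling factor here are illustrative assumptions, not figures from the talk.

    import numpy as np

    # Why low-precision training commonly pairs float16 with loss scaling:
    # gradients below float16's representable range vanish unless scaled.

    true_grad = np.float32(1e-8)                 # a tiny gradient value
    naive_fp16 = np.float16(true_grad)           # underflows to 0.0 in float16

    scale = np.float32(1024.0)                   # illustrative loss-scaling factor
    scaled_fp16 = np.float16(true_grad * scale)  # now representable in float16
    recovered = np.float32(scaled_fp16) / scale  # unscale in float32 for the update

    print("without scaling:", naive_fp16)        # 0.0 -- the gradient is lost
    print("with scaling:   ", recovered)         # ~1e-08 -- the gradient survives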

Syllabus

Introduction.
Success this year.
Deep Speech 2.
Deep learning scales.
The opportunity for HPC.
Workload characteristics.
Beware of ignoring work efficiency.
Beware of ignoring speed of light.
Dense compute.
Fast interconnects.
Elastic SGD.
Optimized kernels.
Specialized systems.
Memory-efficient backpropagation.
Model parallelism.
Low precision training.
Low precision issues.

Taught by

Stanford Online

