AWS Hadoop Fundamentals introduces you to the basics of big data and how the Hadoop framework handles it. The course discusses Hadoop architectures and how large data sets are stored and processed, explains several tools used in the process: MapReduce, Hive, and Pig, and examines Hadoop as part of the AWS big data ecosystem.
Intended Audience
This course is intended for:
- Any individuals interested in learning the fundamental concepts of Hadoop, MapReduce, Hive, and Pig
- Individuals responsible for designing and implementing big data solutions
Course Objectives
In this course, you will learn how to:
- Describe the Hadoop framework and its associated tools
- Explain what MapReduce is and how it processes data
- Explain how the Hive data warehouse system is leveraged with Hadoop
- Identify the components of Hive and Pig
- Describe the Pig Latin query language
- Recognize how Hadoop fits into the AWS big data ecosystem
Prerequisites
We recommend that attendees of this course have the following prerequisites:
- Basic familiarity with big data workloads
Delivery Method
This course is delivered through:
- Digital training
Duration
- 90 minutes
Course Outline
This course covers the following concepts:
- Big data and Hadoop
- Hadoop and MapReduce
- Hive and Pig
- AWS and big data