We will cover all the components of the Hadoop ecosystem at a high level and Spark in depth, as Spark is gaining significant momentum in the Big Data ecosystem due to its in-memory processing capabilities.
The course contains the following five components:
- Hadoop Ecosystem
- Python Basics
- Spark (covered in detail)
- Small projects relevant to Spark
- Job market preparation: interview questions and up to two mock interviews
The Hadoop ecosystem will be covered at a high level:
- Data
- Evolution of Data
- Hadoop Ecosystem
- HDFS
- MapReduce
- YARN
- Hive
- HBase
- Sqoop
Python Basics
Python basics will be covered at a high level.
Spark will be covered in detail:
- Introduction to Spark
- Spark Toolset
- Structured API Overview
- Basic Structured Operations
- Working with Different Types of Data
- Aggregations
- Joins
- Data Sources
- Spark SQL
- Datasets
- Resilient Distributed Datasets (RDDs)
- Distributed Shared Variables
- How Spark Runs on a Cluster
- Developing Spark Applications
- Monitoring and Debugging
Projects
We will provide simple, medium, and complex projects to give individuals hands-on, real-world experience.
Support:
- Providing candidates with interview questions
- Providing model resumes
- Conducting up to two mock interviews
Please call me to schedule a demo class. You can make your decision after attending the demo.