Kalavakkam, Thiruporur, India - 603110.
Details verified of Rajesh J.
Identity is verified based on matching the details uploaded by the Tutor with government databases.
Tamil Mother Tongue (Native)
English Proficient
Anna University, Chennai 2005
Bachelor of Technology (B.Tech.)
Kalavakkam, Thiruporur, India - 603110
ID Verified
Education Verified
Phone Verified
Email Verified
Class Location
Online (video chat via Skype, Google Hangouts, etc.)
Student's Home
Tutor's Home
Years of Experience in Python Training classes
16
Course Duration provided
1-3 months
Seeker background catered to
Individual, Educational Institution
Certification provided
Yes
Python applications taught
PySpark
Teaching Experience in detail in Python Training classes
Apache Spark Programming in Python (PySpark)

If you are looking to expand your knowledge in data engineering, or want to level up your portfolio by adding Spark programming to your skill set, then you are in the right place. This course will help you understand Spark programming and apply that knowledge to build data engineering solutions. The course is example-driven and follows a working-session-like approach: we take a live coding approach and explain all the concepts needed along the way.

We start with a quick introduction to Apache Spark, then set up our environment by installing Apache Spark. Next, we learn about the Spark execution model and architecture, and about the Spark programming model and developer experience. We then cover the Spark structured API foundation, move on to Spark data sources and sinks, and cover DataFrame and Dataset transformations, aggregations in Apache Spark, and finally DataFrame joins. By the end of this course, you will be able to build data engineering solutions using the Spark structured API in Python.

Audience
This course is designed for software engineers who want to develop a data engineering pipeline or application using Apache Spark; for data architects and data engineers responsible for designing and building their organisation's data-centric infrastructure; and for managers and architects who do not work directly on the Spark implementation but work with the people who implement Apache Spark at the ground level. No prior knowledge of Apache Spark or Hadoop is required; only programming knowledge in Python is needed.
Class Location
Online (video chat via Skype, Google Hangouts, etc.)
Student's Home
Tutor's Home
Years of Experience in Big Data Training
16
Big Data Technology
Hadoop, Scala, Apache Spark
Teaching Experience in detail in Big Data Training
If you are looking to expand your knowledge in data engineering, or want to level up your portfolio by adding Spark programming to your skill set, then you are in the right place. This course will help you understand Spark programming and apply that knowledge to build data engineering solutions. The course is example-driven and follows a working-session-like approach: we take a live coding approach and explain all the concepts needed along the way.

We start with a quick introduction to Apache Spark, then set up our environment by installing Apache Spark. Next, we learn about the Spark execution model and architecture, and about the Spark programming model and developer experience. We then cover the Spark structured API foundation, move on to Spark data sources and sinks, and cover DataFrame and Dataset transformations, aggregations in Apache Spark, and finally DataFrame joins. By the end of this course, you will be able to build data engineering solutions using the Spark structured API in Python.

What You Will Learn
- Apache Spark foundation and Spark architecture
- Data engineering and data processing in Spark
- Working with data sources and sinks
- Using the PyCharm IDE for Spark development and debugging
- Unit testing, managing application logs, and cluster deployment

Audience
This course is designed for software engineers who want to develop a data engineering pipeline or application using Apache Spark; for data architects and data engineers responsible for designing and building their organisation's data-centric infrastructure; and for managers and architects who do not work directly on the Spark implementation but work with the people who implement Apache Spark at the ground level. No prior knowledge of Apache Spark or Hadoop is required; only programming knowledge in Python is needed.
Class Location
Online (video chat via Skype, Google Hangouts, etc.)
Student's Home
Tutor's Home
Years of Experience in Microsoft Azure Training
16
Azure Certification offered
Azure Certified Developer, Azure Certified Data Engineer
Teaching Experience in detail in Microsoft Azure Training
This class is for anyone interested in learning about data engineering with PySpark, Hadoop, and Azure Cloud. It is especially well suited for:
- Software engineers who want to transition to a data engineering role
- Data scientists who want to learn more about data engineering to build better data pipelines and models
- Business analysts who want to learn more about data engineering to make better data-driven decisions

What will the students learn in this class?
- What data engineering is and why it is important
- Setting up a single-node cluster on Windows: installation and configuration of Hadoop, Hive, MySQL, and Apache Spark
- Fundamental concepts and hands-on work with Big Data, Hadoop, HDFS, Hive, Python, and Azure Cloud
- Apache Spark foundation and Spark architecture
- Using the PyCharm IDE for Spark development and debugging
- Apache Spark (PySpark) programming from zero to intermediate level
- Sourcing data from different formats, transforming it into valuable data, and storing it in various formats
- Complete knowledge of RDDs, DataFrames, Spark SQL, and Spark Catalyst
- Exception handling, debugging, and deploying Spark programs in the Python flavour
- Creating orchestration and transformation jobs in Azure Data Factory (ADF)
- Developing, executing, and monitoring data flows using Azure Synapse
- Creating big data pipelines using Databricks and Delta tables
- Working with big data in Azure Data Lake using Spark Pool
- Migrating on-premises SSIS jobs to ADF
- Integrating ADF with commonly used Azure services, such as Azure ML, Azure Logic Apps, and Azure Functions
- Running big data compute jobs within HDInsight and Azure Databricks
- Copying data from AWS S3 and Google Cloud Storage to Azure Storage using ADF's built-in connectors

Is there anything the students need to bring to the class?
Students should bring a laptop with a minimum configuration of 8 GB RAM, an Intel i5 processor, and 100 GB HDD/SSD storage.
Benefits of attending the free demo course
- By the end of this course, you will be able to build data engineering solutions using the Spark structured API in Python.
- You will be able to use ADF as the main ETL and orchestration tool for your data warehouse or data platform projects.
- Learn about the latest trends and technologies in data engineering.
- Get hands-on experience with PySpark, Hadoop, and Azure Cloud.
- Ask questions and get expert advice from experienced instructors.
1. Which classes do you teach?
I teach Big Data Training, Microsoft Azure Training, and Python Training classes.
2. Do you provide a demo class?
Yes, I provide a free demo class.
3. How many years of experience do you have?
I have been teaching for 16 years.
Certified
The Certified badge indicates that the Tutor has received a good amount of positive feedback from Students.