Hadoop Developer Resume Samples

A Hadoop Developer is accountable for coding and programming applications that run on Hadoop. The job description is similar to that of a Software Developer. The specific duties mentioned on the Hadoop Developer Resume include the following – undertaking Hadoop development and implementation; loading data from disparate data sets; pre-processing using Pig and Hive; designing, configuring, and supporting Hadoop; translating complex functional and technical requirements into detailed designs; performing analysis of vast amounts of data; managing and deploying HBase; and proposing best practices and standards.

The possible skill sets that can attract an employer include the following – knowledge of Hadoop; a good understanding of back-end programming such as Java, Node.js, and OOAD; the ability to write MapReduce jobs; good knowledge of database structures, principles, and practices; HiveQL proficiency; and knowledge of workflow schedulers like Oozie. Those looking for a career path in this line should earn a computer science degree and get professionally trained in Hadoop.

Looking to draft your winning cover letter? See our sample Hadoop Developer Cover Letter.

Hadoop Developer Resume

Headline : Junior Hadoop Developer with 4+ years of experience in project development, implementation, deployment, and maintenance using Java/J2EE and Big Data related technologies. Experienced in designing and implementing complete end-to-end Hadoop-based data analytics solutions using HDFS, MapReduce, Spark, YARN, Kafka, Pig, Hive, Sqoop, Storm, Flume, Oozie, Impala, HBase, etc. Good experience in creating data ingestion pipelines, data transformations, data management, data governance, and real-time streaming at an enterprise level.

Skills : HDFS, MapReduce, Spark, YARN, Kafka, Pig, Hive, Sqoop, Storm, Flume, Oozie, Impala, HBase, Hue, and Zookeeper


Description :

  1. Coordinated with business customers to gather business requirements.
  2. Interacted with other technical peers to derive technical requirements.
  3. Implemented map-reduce programs to handle semi-structured and unstructured data like XML, JSON, and Avro data files, and sequence files for log files (a mapper sketch follows this list).
  4. Developed Sqoop jobs to import and store massive volumes of data in HDFS and Hive.
  5. Designed and developed pig data transformation scripts to work against unstructured data from various data points and created a baseline.
  6. Experienced in implementing Spark RDD transformations and actions to carry out the business analysis.
  7. Designed a data quality framework to perform schema validation and data profiling on Spark.
  8. Leveraged Spark to manipulate unstructured data and apply text mining to users' table utilization data.
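Purely to make item 3 concrete, here is a minimal sketch of a cleaning mapper for JSON log lines, assuming a hypothetical event_id field and Jackson for parsing; malformed records are counted and dropped rather than passed downstream.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import com.fasterxml.jackson.core.JsonProcessingException;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Drops malformed JSON log lines and emits event_id -> raw record for later stages.
    public class JsonCleanMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final ObjectMapper jsonParser = new ObjectMapper();
        private final Text outKey = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            JsonNode record;
            try {
                record = jsonParser.readTree(line.toString());
            } catch (JsonProcessingException badJson) {
                context.getCounter("clean", "malformed_json").increment(1);
                return;
            }
            if (record == null || !record.has("event_id")) {              // hypothetical field
                context.getCounter("clean", "missing_event_id").increment(1);
                return;
            }
            outKey.set(record.get("event_id").asText());
            context.write(outKey, line);                                  // keep the raw record as the value
        }
    }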
Experience: 5-7 Years
Level: Executive
Education: Masters of Science


Hadoop Developer Temp Resume

Objective : Hadoop Developer with professional experience in the IT industry, involved in developing, implementing, and configuring Hadoop ecosystem components in a Linux environment; developing and maintaining various applications using Java/J2EE; and devising strategic methods for deploying Big Data technologies to efficiently solve Big Data processing requirements. Hands-on experience with Hadoop ecosystem components such as HDFS, MapReduce, Yarn, Pig, Hive, HBase, Oozie, Zookeeper, Sqoop, Flume, Impala, Kafka, and Storm. Excellent programming skills at a higher level of abstraction using Scala and Spark.

Skills : Apache Hadoop, HDFS, MapReduce, Hive, Pig, Oozie, Sqoop, Spark, Cloudera Manager, and EMR. Databases: MySQL, Oracle, SQL Server, HBase


Description :

  1. Responsible for the design and migration of the existing MSBI system to Hadoop.
  2. Developed a batch processing framework to ingest data into HDFS, Hive, and HBase (see the HBase sketch after this list).
  3. Completed basic to complex systems analysis, design, and development. Played a key role as an individual contributor on complex projects.
  4. Assisted the client in addressing daily problems/issues of any scope.
  5. Determined feasible solutions and made recommendations.
  6. Directed less experienced resources and coordinated systems development tasks on small to medium scope efforts or on specific phases of larger projects.
  7. Participated with other development, operations, and technology staff, as appropriate, in overall systems and integrated testing on small to medium scope efforts or on specific phases of larger projects.
  8. Prepared test data and executed the detailed test plans. Completed any required debugging.
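The HBase side of the batch ingestion framework in item 2 could look roughly like this sketch, which uses the standard HBase client API; the table name, column family, and row-key format are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Writes one cleaned record into an HBase table as part of a batch ingest step.
    public class HBaseIngestExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("customer_events"))) {   // hypothetical table
                Put put = new Put(Bytes.toBytes("cust123#2017-01-01"));                       // hypothetical row key
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"), Bytes.toBytes("ACTIVE"));
                table.put(put);
            }
        }
    }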
Experience: 2-5 Years
Level: Junior
Education: Bachelor of Engineering


Bigdata/Hadoop Developer Resume

Objective : Experienced Bigdata/Hadoop Developer with a background in developing and supporting software applications and in devising strategic approaches for deploying Big Data technologies to efficiently solve Big Data processing requirements. Strong understanding of distributed systems, RDBMS, large-scale and small-scale non-relational data stores, NoSQL map-reduce systems, database performance, data modeling, and multi-terabyte data warehouses. Hands-on experience with the Hadoop Distributed File System, the Hadoop framework, and parallel processing implementations.

Skills : Sqoop, Flume, Hive, Pig, Oozie, Kafka, MapReduce, HBase, Spark, Cassandra, Parquet, Avro, ORC, Cloudera CDH 5.5, Hortonworks Sandbox


Description :

  1. Developed MapReduce jobs in Java for data cleaning and preprocessing.
  2. Responsible for developing data pipeline using Flume, Sqoop, and PIG to extract the data from weblogs and store in HDFS.
  3. Responsible for using Cloudera Manager, an end to end tool to manage Hadoop operations.
  4. Worked on loading all tables from the reference source database schema through Sqoop.
  5. Designed, coded, and configured server-side J2EE components like JSP and Java, and worked with AWS.
  6. Worked on designing and developing ETL workflows using Java for processing data in HDFS/HBase using Oozie.
  7. Involved in moving all log files generated from various sources to HDFS for further processing through Flume.
  8. Involved in loading and transforming large sets of structured, semi-structured and unstructured data from relational databases into HDFS using Sqoop imports.
  9. Developed Sqoop scripts to import and export data from relational sources, and handled incremental loading of customer and transaction data by date.
  10. Developed simple and complex MapReduce programs in Java for data analysis on different data formats (a driver sketch follows this list).
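As one concrete instance of item 10, the driver below wires a token-count job with Hadoop's built-in TokenCounterMapper and IntSumReducer; the job name and the assumption that input and output paths arrive as command-line arguments are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

    // Minimal MapReduce driver using Hadoop's bundled mapper and reducer classes.
    public class AnalysisJobDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "token-count");   // job name is illustrative
            job.setJarByClass(AnalysisJobDriver.class);
            job.setMapperClass(TokenCounterMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));            // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1]));          // must not already exist
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }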
Experience: 0-2 Years
Level: Entry Level
Education: Bachelors of Technology

Java/Hadoop Developer Resume

Headline : Over 5 years of IT experience in software development and support, with experience in developing strategic methods for deploying Big Data technologies to efficiently solve Big Data processing requirements. Expertise in the Hadoop ecosystem components HDFS, MapReduce, Yarn, HBase, Pig, Sqoop, Spark, Spark SQL, Spark Streaming, and Hive for scalability, distributed computing, and high-performance computing. Experience in using Hive Query Language for data analytics.

Skills : HDFS, MapReduce, Sqoop, Flume, Pig, Hive, Oozie, Impala, Spark, Zookeeper, and Cloudera Manager. NoSQL Databases: HBase, Cassandra. Monitoring and Reporting: Tableau


Description :

  1. Provided an online premium calculator for registered and non-registered users, and provided online customer support features such as chat, agent locators, branch locators, FAQs, and a best-plan selector to increase the likelihood of a sale.
  2. Installed/configured/maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper, and Sqoop.
  3. Wrote the shell scripts to monitor the health check of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  4. Installed and configured Hadoop, MapReduce, HDFS (Hadoop Distributed File System), developed multiple MapReduce jobs in java for data cleaning.
  5. Developed data pipeline using Flume, Sqoop, Pig and Java MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
  6. Involved in collecting and aggregating large amounts of log data using Apache Flume and staging data in HDFS for further analysis.
  7. Worked on installing cluster, commissioning & decommissioning of data nodes, name-node recovery, capacity planning, and slots configuration.
  8. Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
  9. Used Pig as an ETL tool to perform transformations, event joins, and some pre-aggregations before storing the data onto HDFS (see the embedded Pig sketch after this list).
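The Pig work in items 8 and 9 is normally written as standalone Pig Latin scripts; the sketch below shows the same pattern driven from Java through Pig's embedded PigServer API, with the weblog paths and field layout as assumptions.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    // Runs a small Pig Latin ETL flow (load, filter, pre-aggregate) from Java via PigServer.
    public class WebLogPigEtl {
        public static void main(String[] args) throws Exception {
            PigServer pig = new PigServer(ExecType.MAPREDUCE);   // ExecType.LOCAL works for a quick test
            pig.registerQuery("raw = LOAD '/data/weblogs/raw' USING PigStorage('\\t') "
                    + "AS (ts:chararray, url:chararray, status:int, bytes:long);");   // hypothetical layout
            pig.registerQuery("ok = FILTER raw BY status == 200;");
            pig.registerQuery("by_url = GROUP ok BY url;");
            pig.registerQuery("hits = FOREACH by_url GENERATE group AS url, COUNT(ok) AS hits, SUM(ok.bytes) AS total_bytes;");
            pig.store("hits", "/data/weblogs/hits_by_url");      // writes the result back to HDFS
            pig.shutdown();
        }
    }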
Experience: 5-7 Years
Level: Executive
Education: Bachelors of Engineering

Senior ETL And Hadoop Developer Resume

Headline : A qualified Senior ETL and Hadoop Developer with 5+ years of experience, including experience as a Hadoop developer. Work experience across various phases of the SDLC such as Requirement Analysis, Design, Code Construction, and Test. Basic knowledge of the real-time processing tools Storm and Spark. Experienced in analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java. Experienced with the monitoring tools Ganglia, Cloudera Manager, and Ambari. Experienced in importing and exporting data using Sqoop between HDFS and Relational Database Systems such as Teradata, and vice versa.

Skills : Hadoop Technologies: HDFS, MapReduce, Hive, Impala, Pig, Sqoop, Flume, Oozie, Zookeeper, Ambari, Hue, Spark, Storm, Talend


Description :

  1. Analyzed incoming data through a series of scheduled jobs, delivered the desired output, and presented the data in a portal so that it could be accessed by different teams for various analysis and sales purposes.
  2. Responsible for building scalable distributed data solutions using Hadoop.
  3. Proficient in using Cloudera Manager, an end-to-end tool to manage Hadoop operations.
  4. Driving the data mapping and data modeling exercise with the stakeholders.
  5. Developed/captured/documented architectural best practices for building systems on AWS.
  6. Used Pig as an ETL tool (similar to Informatica) to perform transformations, event joins, and pre-aggregations before storing the curated data into HDFS.
  7. Launching and setup of Hadoop related tools on AWS, which includes configuring different components of Hadoop.
  8. Collected the logs from the physical machines and the OpenStack controller and integrated into HDFS using flume.
  9. Experienced in migrating HiveQL queries to Impala to minimize query response time (see the JDBC sketch after this list).
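The HiveQL-to-Impala migration in item 9 can be compared from Java because both HiveServer2 and the Impala daemon accept connections over the HiveServer2 JDBC protocol; the hosts, the sample query, and the unsecured (noSasl) Impala URL are assumptions, while ports 10000 and 21050 are the usual defaults.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Runs the same aggregate query against HiveServer2 and Impala and reports the elapsed time.
    public class HiveVsImpalaQuery {
        private static final String QUERY =
                "SELECT region, COUNT(*) FROM sales.orders GROUP BY region";    // hypothetical table

        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            timeQuery("jdbc:hive2://hive-host:10000/default");        // HiveServer2
            timeQuery("jdbc:hive2://impala-host:21050/;auth=noSasl"); // Impala daemon, unsecured cluster
        }

        private static void timeQuery(String url) throws Exception {
            long start = System.currentTimeMillis();
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(QUERY)) {
                int rows = 0;
                while (rs.next()) {
                    rows++;
                }
                System.out.printf("%s -> %d rows in %d ms%n",
                        url, rows, System.currentTimeMillis() - start);
            }
        }
    }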
Experience: 5-7 Years
Level: Executive
Education: Masters of Science

Bigdata/Hadoop Developer Resume

Headline : Bigdata/Hadoop Developer with 7+ years of IT experience in software development, with experience in developing strategic methods for deploying Big Data technologies to efficiently solve Big Data processing requirements. Experience with distributed systems, large-scale non-relational data stores, RDBMS, and NoSQL map-reduce systems. Working experience with the Hadoop framework, the Hadoop Distributed File System, and parallel processing implementations. Hands-on experience with the overall Hadoop ecosystem - HDFS, MapReduce, Pig/Hive, HBase, and Spark.

Skills : HDFS, MapReduce, YARN, Hive, Pig, HBase, Zookeeper, Sqoop, Oozie, Apache Cassandra, Flume, Spark, Java Beans, JavaScript, Web Services


Description :

  1. Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  2. Experience in installing, configuring and using Hadoop ecosystem components.
  3. Experience in importing and exporting data into HDFS and Hive using Sqoop.
  4. Developed ADF workflows for scheduling the Cosmos copy, Sqoop activities, and Hive scripts.
  5. Responsible for creating the dispatch job to load data into the Teradata layout; worked on big data integration and analytics based on Hadoop, Solr, Spark, Kafka, Storm, and webMethods technologies.
  6. Participated in the development and implementation of the Cloudera Hadoop environment.
  7. Loaded and transformed large sets of structured, semi-structured and unstructured data.
  8. Experience in working with various kinds of data sources such as MongoDB and Oracle.
  9. Installed the Oozie workflow engine to run multiple map-reduce programs that run independently based on time and data availability (see the Oozie client sketch after this list).
  10. Experience developing Splunk queries and dashboards targeted at understanding.
  11. Developed Python mapper and reducer scripts and implemented them using Hadoop Streaming.
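A hedged sketch of item 9: submitting one of the scheduled workflows through Oozie's Java client API. The Oozie URL, the HDFS application path, and the property names expected by workflow.xml are illustrative assumptions.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    // Submits a workflow application already deployed on HDFS and polls until it finishes.
    public class OozieSubmitExample {
        public static void main(String[] args) throws Exception {
            OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");   // default Oozie port

            Properties conf = oozie.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://nameservice1/user/etl/apps/log-clean");  // hypothetical path
            conf.setProperty("nameNode", "hdfs://nameservice1");     // values referenced by workflow.xml
            conf.setProperty("jobTracker", "rm-host:8032");          // ResourceManager address, illustrative
            conf.setProperty("queueName", "default");

            String jobId = oozie.run(conf);                          // submit and start the workflow
            System.out.println("Submitted workflow " + jobId);

            WorkflowJob.Status status = oozie.getJobInfo(jobId).getStatus();
            while (status == WorkflowJob.Status.PREP || status == WorkflowJob.Status.RUNNING) {
                Thread.sleep(10_000);                                // poll every ten seconds
                status = oozie.getJobInfo(jobId).getStatus();
            }
            System.out.println("Final status: " + status);
        }
    }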
Experience: 5-7 Years
Level: Executive
Education: Bachelors of Science

Java/Hadoop Developer Resume

Objective : Java/Hadoop Developer with strong technical, administration, and mentoring knowledge of Linux and Bigdata/Hadoop technologies. Experience in designing, modeling, and implementing big data projects using Hadoop HDFS, Hive, MapReduce, Sqoop, Pig, Flume, and Cassandra. Experience in designing, installing, configuring, capacity planning, and administering Hadoop clusters of the major Hadoop distributions Cloudera Manager & Apache Hadoop. Involved in converting Hive queries into Spark SQL transformations using Spark RDDs and Scala.

Skills : Cloudera Manager. Web/App Servers: Apache Tomcat Server, JBoss. IDEs: Eclipse, Microsoft Visual Studio, NetBeans, MS Office. Web Technologies: HTML, CSS, AJAX, JavaScript, and XML


Description :

  1. Working with engineering leads to strategize and develop data flow solutions using Hadoop, Hive, Java, Perl in order to address long-term technical and business needs.
  2. Developing and running map-reduce jobs on multi-petabyte YARN and Hadoop clusters that process billions of events every day, to generate daily and monthly reports per users' needs.
  3. Working with R&D, QA, and Operations teams to understand, design, and develop and support the ETL platforms and end-to-end data flow requirements.
  4. Extensive experience in extraction, transformation, and loading of data from multiple sources into the data warehouse and data mart.
  5. Building data insightful metrics feeding reporting and other applications.
  6. Supporting team, like mentoring and training new engineers joining our team and conducting code reviews for data flow/data application implementations.
  7. Implementing technical solutions for POCs, writing code using technologies such as Hadoop, YARN, Python, and Microsoft SQL Server.
  8. Implemented Storm to process over a million records per second per node on a cluster of modest size (see the topology sketch after this list).
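Item 8's Storm work might be wired roughly as below, with Storm's bundled TestWordSpout standing in for the real data source and the topology run on an in-process LocalCluster; component names and parallelism hints are assumptions, and the package names follow Storm 1.x and later.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.testing.TestWordSpout;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // A tiny topology: a test spout feeding a counting bolt, grouped by field so counts stay consistent.
    public class RecordCountTopology {

        public static class CountBolt extends BaseBasicBolt {
            private final Map<String, Long> counts = new HashMap<>();

            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                String word = tuple.getString(0);
                long count = counts.merge(word, 1L, Long::sum);
                collector.emit(new Values(word, count));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("word", "count"));
            }
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("words", new TestWordSpout(), 2);      // stand-in for a Kafka or log spout
            builder.setBolt("counts", new CountBolt(), 4).fieldsGrouping("words", new Fields("word"));

            Config conf = new Config();
            conf.setNumWorkers(2);                                  // parallelism knobs are illustrative

            LocalCluster cluster = new LocalCluster();              // use StormSubmitter on a real cluster
            cluster.submitTopology("record-count", conf, builder.createTopology());
            Thread.sleep(30_000);                                   // let the topology run briefly
            cluster.shutdown();
        }
    }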
Experience: 2-5 Years
Level: Executive
Education: Bachelor of Technology

Hadoop Developer Resume

Headline : Hadoop Developer with 6+ years of total IT experience, including 3 years of hands-on experience in Big Data/Hadoop technologies and 3 years of extensive experience in Java/J2EE technologies, database development, ETL tools, and data analytics.

Skills : Sqoop, Flume, Hive, Pig, Oozie, Kafka, MapReduce, HBase, Spark, Cassandra, Parquet, Avro, ORC, Cloudera CDH 5.5, Hortonworks Sandbox, Windows Azure, Java, Python.


Description :

  1. Installed and configured Apache Hadoop clusters using yarn for application development and apache toolkits like Apache Hive, Apache Pig, HBase, Apache Spark, Zookeeper, Flume, Kafka, and Sqoop.
  2. Used Sqoop to efficiently transfer data between databases and HDFS and used flume to stream the log data from servers.
  3. Developed MapReduce programs for pre-processing and cleansing the data in HDFS obtained from heterogeneous data sources, to make it suitable for ingestion into the Hive schema for analysis.
  4. Created tasks for incremental loads into staging tables and scheduled them to run.
  5. Used Apache Kafka as a messaging system to load log data and data from UI applications into HDFS (see the producer sketch after this list).
  6. Used Pig to perform data transformations, event joins, filter and some pre-aggregations before storing the data onto HDFS.
  7. Created Hive external tables with partitioning to store the processed data from MapReduce.
  8. Implemented different analytical algorithms using MapReduce programs to apply on top of HDFS data.
  9. Implemented Hive optimized joins to gather data from different sources and run ad-hoc queries on top of them.
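The Kafka publishing side of item 5 could look like this sketch, using the standard Java producer API; the broker list, topic name, and sample log line are assumptions, and the consumer that actually lands the topic in HDFS (for example Flume or a connector) is not shown.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Publishes application log lines to a Kafka topic consumed downstream into HDFS.
    public class LogEventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");   // hypothetical brokers
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("acks", "all");                                      // wait for full acknowledgement

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String logLine = "2017-03-01T10:15:00Z ui-app INFO user=42 action=checkout";   // sample record
                producer.send(new ProducerRecord<>("app-logs", "ui-app", logLine));            // topic, key, value
            }
        }
    }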
Experience: 5-7 Years
Level: Executive
Education: Bachelor of Engineering

Big Data/Hadoop Developer Resume

Objective : Big Data/Hadoop Developer with an excellent understanding of Hadoop architecture and its various components, such as HDFS, Job Tracker, Task Tracker, NameNode, DataNode, and the MapReduce programming paradigm. Well versed in installing, configuring, administering, and tuning Hadoop clusters of the major Hadoop distributions Cloudera CDH 3/4/5 and Hortonworks HDP 2.3/2.4, and in Amazon Web Services AWS EC2, EBS, and S3. Hands-on experience in configuring and working with Flume to load data from multiple sources directly into HDFS.

Skills : Hadoop/Big Data: HDFS, MapReduce, Yarn, Hive, Pig, HBase, Sqoop, Flume, Oozie, Zookeeper, Storm, Scala, Spark, Kafka, Impala, HCatalog, Apache Cassandra, PowerPivot


Description :

  1. Responsibilities include interacting with the business users from the client side to discuss and understand ongoing enhancements and changes to the upstream business data, and performing data analysis.
  2. Experienced in loading and transforming large sets of structured and semi-structured data through Sqoop and placing it in HDFS for further processing.
  3. Involved in running Hadoop jobs for processing millions of records of text data.
  4. Involved in transforming data from legacy tables to HDFS, and HBase tables using Sqoop.
  5. Experience in writing map-reduce programs and using Apache Hadoop API for analyzing the data.
  6. Analyzed the data by performing hive queries and running pig scripts to study data patterns.
  7. Monitored Hadoop scripts which take the input from HDFS and load the data into Hive.
  8. Developed pig scripts to arrange incoming data into suitable and structured data before piping it out for analysis.
  9. Designed appropriate partitioning/bucketing schemas to allow faster data retrieval during analysis using Hive (see the DDL sketch after this list).
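One possible shape for the partitioned and bucketed table in item 9, created here from Java over the Hive JDBC driver; the table name, columns, bucket count, and HDFS location are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Creates a partitioned, bucketed external Hive table so queries can prune partitions and sample buckets.
    public class CreatePartitionedTable {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection("jdbc:hive2://hive-host:10000/default");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(
                    "CREATE EXTERNAL TABLE IF NOT EXISTS web_events ("
                    + "  user_id BIGINT, url STRING, status INT)"
                    + " PARTITIONED BY (event_date STRING)"       // date filters prune whole partitions
                    + " CLUSTERED BY (user_id) INTO 32 BUCKETS"   // buckets help sampling and map-side joins
                    + " STORED AS ORC"
                    + " LOCATION '/data/web_events'");
            }
        }
    }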
Experience: 2-5 Years
Level: Junior
Education: Bachelors of Science

Hadoop Developer Resume

Summary : Experience in importing and exporting data using Sqoop between HDFS and Relational Database Systems, and vice versa. Implemented data ingestion from multiple sources, such as IBM Mainframes and Oracle, using Sqoop and SFTP. Worked extensively in the healthcare domain.

Skills : HDFS, MapReduce, Pig, Hive, HBase, Sqoop, Oozie, Spark, Scala, Kafka, Zookeeper, MongoDB. Programming Languages: C, Core Java, Linux Shell Script, Python, COBOL


Description :

  1. Enhanced performance using various Hadoop sub-projects, performed data migration from legacy systems using Sqoop, handled performance tuning, and conducted regular backups.
  2. Worked with various data sources like RDBMS, mainframe flat files, fixed length files, and delimited files.
  3. Responsible for building scalable distributed data solutions using Hadoop.
  4. Responsible for understanding business needs, analyzing functional specifications, and mapping those to the design and development of programs and algorithms.
  5. Handled delta processing or incremental updates using Hive and processed the data in Hive tables.
  6. Optimizing MapReduce code, Hive/Pig scripts for better scalability, reliability, and performance.
  7. Involved in creating Hive tables, loading with data and writing hive queries.
  8. Developed Java map-reduce programs as per the client's requirements and used them to process the data into Hive tables (a reducer sketch follows this list).
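Item 8's aggregation step might use a reducer like the one below, whose output directory an external Hive table can point at; the member and claim field names echo the healthcare domain mentioned above but are otherwise hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Sums claim amounts per member; the tab-separated output lands in a directory backing a Hive table.
    public class ClaimTotalsReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        private final DoubleWritable total = new DoubleWritable();

        @Override
        protected void reduce(Text memberId, Iterable<DoubleWritable> amounts, Context context)
                throws IOException, InterruptedException {
            double sum = 0.0;
            for (DoubleWritable amount : amounts) {
                sum += amount.get();
            }
            total.set(sum);
            context.write(memberId, total);   // default TextOutputFormat writes key<TAB>value lines
        }
    }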
Experience: 7-10 Years
Level: Senior
Education: Masters of Science