
Mahendra Kumar R

System Engineer - 12 Years of Experience - Near 85021

Occupation: System Engineer

Location: Phoenix, AZ

Education Level: Master's

Will Relocate: Yes

Description

Profile:
* 12+ years of total IT experience
* 3+ years of Big Data / Hadoop experience
* 10 years of Java experience

Summary:
* Certified Cloudera Hadoop Developer (CCDH).
* Good understanding of Kafka and of Spark batch and Spark Streaming jobs, with code written in Scala (a minimal sketch follows this summary).
* Good understanding of Kafka topics, publishers, consumers, partitions, consumer groups, and offset management.
* Sound understanding of the Hadoop framework, MapReduce, and the Hadoop ecosystem components Hive, Pig, and Sqoop.
* Experience working with Hadoop ecosystem infrastructure such as MapReduce, Hive, Pig, Sqoop, and Apache Flume.
* Sun Certified Java Developer with 10 years of work experience across all phases of the software development life cycle (SDLC), including system analysis, design, development, implementation, testing, and production of web applications built on the J2EE stack.
* Good hands-on experience with Eclipse 3.x, Struts, Spring, Hibernate 3.0, BEA WebLogic Application Server, IBM WebSphere Application Server with WSAD and RAD, and JBoss Application Server. Also familiar with the Tomcat 6.x web server; the Oracle, MySQL, and DB2 databases; and tools such as ANT, Maven, Enterprise Architect, Rational Rose, and security APIs such as JAAS.
* Proficient in designing and developing component-based, object-oriented, multi-tier web systems, with hands-on expertise in the front-end UI layer, the application layer (including middleware and core business frameworks), and back-end (database) integration.
* Strong experience building distributed, high-performance, high-transaction, multithreaded products and systems.
* Effectively implemented Scrum principles and coached teams on agile practices.
* Led daily scrums to remove blockers within the team.
* Protected the team and served as the single point of contact for other teams' dependencies, blockers, and issues on PLP.
* Responsible for sprint planning, retrospectives, showcases, release planning, and grooming the product backlog.
* Educated product owners on how to maximize ROI and productivity and meet their objectives via the Scrum methodology; also assisted in prioritizing Zendesk tickets (defects).
* Tracked team velocity and burndown charts and planned sprints accordingly.
* Improved productivity by improving the development process and advocating continuous integration and test-driven development.
* Extensive work with the open-source frameworks Struts, Hibernate, and Spring, with in-depth knowledge of design patterns.
* Expert in Java web services, WSDL, UDDI, and SOA (Service Oriented Architecture), with extensive knowledge of financial product development.
* Solid experience building n-tier web applications using JSPs, Servlets, EJB 3.0, Struts, Oracle, and MS SQL Server, with adeptness on the JBoss, WebLogic, WebSphere, and Tomcat application servers.
* Strong database programming skills in SQL and PL/SQL, with good insight into data modeling, database design, and normalization.
* Excellent skills in building websites that conform to Web 2.0 standards, using valid, table-free code with XML, XSL, DTD, XML Schema, JavaScript, JSTL, XHTML, and DHTML.
* Extensively worked on JMS using point-to-point and publisher/subscriber messaging domains; well versed in design and analysis using UML methodologies and in testing with JUnit. Proficient in the Eclipse and RAD IDEs, with good insight into the HTTPS, TCP/IP, FTP, and SOAP protocols.
* Great leadership and mentoring skills along with a good work ethic.
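The Kafka and Spark Streaming items above describe consuming Kafka topics with managed offsets from Scala code. For illustration only (not code from any project listed here), a minimal Spark Structured Streaming sketch of that pattern might look like the following; the broker address, topic name, and HDFS paths are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch, assuming hypothetical broker, topic, and path names.
object KafkaToHdfsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-sketch")
      .getOrCreate()

    // Subscribe to a Kafka topic; Spark records the consumed offsets in the
    // checkpoint directory, which is how offset management is handled here.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
      .option("subscribe", "events")                     // placeholder topic
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers binary key/value pairs; keep the payload as a string
    // along with the record timestamp.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Write each micro-batch to HDFS as Parquet; paths are placeholders.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/staging/events")
      .option("checkpointLocation", "/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```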
Experienced with batch processing of data sources using Apache Spark.
* Part of the design team for SCD and data validation.
* Used shell scripting for job automation.
* Worked on QA support activities, test data creation, and unit testing.
Environment: HDP Hadoop platform, Spark Streaming, Spark batch, Spark SQL, Kafka, Scala, Java, HBase, Solr, Cassandra, MySQL.

Senior Hadoop Developer/Lead
Experienced with batch processing of data sources using Apache Spark.
* Involved in ingesting data into IDW staging directly through Spark and Sqoop to push data into HDFS (see the sketch after this entry).
* Part of the design team for SCD and data validation.
* Used shell scripting for job automation.
* Worked on QA support activities, test data creation, and unit testing.
* Proposed an automated system using shell scripts to implement imports with Sqoop.
Environment: HDP Hadoop platform, Spark Core, Spark JDBC connector, Spark SQL, Sqoop, Scala, Java, MySQL.
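The entry above mentions pulling source data into IDW staging with Spark (via its JDBC connector) and Sqoop. As a hedged illustration only, a Spark JDBC pull in Scala might look like the sketch below; the connection URL, table, split column and bounds, and output path are all hypothetical, and the actual ingestion may well have used Sqoop directly instead.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of a Sqoop-style parallel JDBC pull with Spark; all names are placeholders.
object JdbcToHdfsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("jdbc-to-hdfs-sketch")
      .getOrCreate()

    // Read a source table over JDBC in parallel, similar to Sqoop's --num-mappers split.
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://dbhost:3306/sales")   // placeholder database
      .option("dbtable", "orders")                       // placeholder table
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      .option("partitionColumn", "order_id")             // numeric column used to split the read
      .option("lowerBound", "1")
      .option("upperBound", "1000000")
      .option("numPartitions", "8")
      .load()

    // Land the pulled rows in an HDFS staging directory as Parquet.
    orders.write
      .mode("overwrite")
      .parquet("/data/idw/staging/orders")

    spark.stop()
  }
}
```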
Senior Hadoop Developer/Lead, Employer: Wisdom Info Tech, Client: T-Mobile, WA, USA, Project: IDW (Integrated Data Warehouse), Oct '15 - Jun '16

The purpose of the IDW (Integrated Data Warehouse) project is to modernize T-Mobile's Business Intelligence solutions. The IDW, built on a Hadoop data lake, is a scalable BI platform that adapts to the speed of the business by providing relevant, accessible, timely, connected, and accurate data. This new data warehouse is one element of a larger BI ecosystem that spans reporting and analytical technologies. The intent of the project is to address specific pain points in the current data warehouse, e.g., latency, integration, cost, security, and multiple versions of the truth. The IDW project is complementary to the company's technical solutions for Big Data and discovery analytics.

Responsibilities:
* Involved in ingesting data into IDW staging directly from BEAM (an in-built component for ingesting real-time data into Hadoop), using Apache Storm to push data into HDFS.
* Used Oozie operational services for batch processing and for dynamically scheduling workflows that run multiple Hive, shell-script, and Pig jobs independently based on time and data availability.
* Part of the design team for various generic components such as SCD and data validation.
* Developed solutions for several data ingestion channels and patterns; also involved in resolving production issues.
* Extensively worked on creating end-to-end data pipeline orchestration using Oozie.
* Worked on QA support activities, test data creation, and unit testing.
* Used HBase alongside Hive/Pig as required.
* Worked on Pig joins and join optimization, processing incremental data using Hadoop.
* Created Oozie jobs using Sqoop to export data from Hadoop to Teradata development.
* Involved in developing a customized in-built tool, the Data Movement Framework (DMF), for ingesting data from external and internal sources into Hadoop using Sqoop and shell scripts.
* Proposed an automated system using shell scripts to implement imports with Sqoop.
* Worked in an Agile development approach and managed the Hadoop teams across various sprints.

Environment: HDP Hadoop platform, HDFS, HBase, Hive, Impala, Java, Sqoop, Solr, Oracle, MySQL, Storm.

Hadoop Developer, Client: AmFam Insurance, WI, USA, Project: Life Claims, Nov '14 - Oct '15

The purpose of the project is to perform analysis on historical claims data for the effectiveness and validity of controls, and to store terabytes of information from flat files, DB2, and Oracle generated by different lines of business. This data is stored in the Hadoop file system and processed using MapReduce jobs, which in turn involves getting the raw data, processing the data to obtain controls and redesign history information, extracting analytical data from the controls history, and exporting the information for further processing.

Responsibilities:
* Used Sqoop and Flume to load raw data from various data sources onto the Hadoop cluster.
* Applied various transformations using Pig to process the data.
* Used skewed joins in Pig to process the data.
* Created Hive tables on top of the raw data, partitioned by date, whose processed results are produced in a tabular format (see the sketch after this entry).
* Developed MapReduce programs to optimize writes and parse data in HDFS obtained from various data sources.
* Created Hive internal/external tables with proper static and dynamic partitions.
* Used Hive to analyze unified historical data in HDFS to identify issues and behavioral patterns.
* Exported Hadoop-processed data to Netezza.

Environment: Hadoop, Flume, Kafka, Hive, Sqoop, Pig, Impala, Netezza, Java, Eclipse, Linux, Oracle.
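The Life Claims entry above relies on Hive tables partitioned by date over raw claim files. As an illustration only of that partitioning pattern (the project itself built the tables in Hive/MapReduce rather than Spark, and the table, column, and path names here are hypothetical), the same kind of DDL can be issued from Scala through Spark SQL's Hive support:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of a date-partitioned external Hive table; all identifiers are placeholders.
object ClaimsHiveTableSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-partitioned-table-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // External table over raw delimited claim files, partitioned by load date.
    spark.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS claims_raw (
        |  claim_id STRING,
        |  lob      STRING,
        |  amount   DOUBLE
        |)
        |PARTITIONED BY (load_date STRING)
        |ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        |LOCATION '/data/claims/raw'""".stripMargin)

    // Register a day's partition once its files have landed in HDFS.
    spark.sql(
      """ALTER TABLE claims_raw ADD IF NOT EXISTS
        |PARTITION (load_date = '2015-06-01')
        |LOCATION '/data/claims/raw/2015-06-01'""".stripMargin)

    // Queries that filter on load_date prune to the matching partitions
    // instead of scanning the full history.
    spark.sql(
      "SELECT lob, COUNT(*) AS claim_count FROM claims_raw WHERE load_date = '2015-06-01' GROUP BY lob"
    ).show()

    spark.stop()
  }
}
```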

