
VENKATESWARLU S

Lead Developer - 13 Years of Experience - Near 75924

Occupation:

Software Engineer

Education Level:

Master

Will Relocate:

YES

Description

Highly self-motivated IT professional: an experienced Big Data Engineer / Data Architect who is passionate about data, with a demonstrated history in the telecommunications and data-management industries and over 14 years of extensive experience in application design and development using distributed technologies and relational and NoSQL databases. Proficient as a Data Architect / Technical Lead and as an individual contributor on cross-functional product development teams challenged with delivering on-time, on-budget results. Highly skilled in data modelling, data replication using Oracle GoldenGate, performance optimization, query tuning, and implementing ETL data pipelines using the Hadoop stack (HDP), Hive, Spark (Scala/PySpark), EMR, Apache Cassandra, PL/SQL and C++ programming. I hold a Diploma in Computer Practice and a Master of Computer Applications (MCA) from Osmania University.

* 14+ years of overall IT experience.
* 8+ years of programming experience using C++ / VC++ / Python / PL/SQL / SQL.
* 6+ years of hands-on experience in data ingestion and bi-directional data replication using Oracle GoldenGate.
* 4+ years of experience with Big Data technologies: Hadoop, Spark (Python/Scala), AWS / EMR / S3, Hive, NoSQL data stores (MongoDB, Cassandra), VoltDB.
* Data Engineer / DBA / Database Architect: hands-on experience across the Software Development Life Cycle (SDLC) of projects - system study, analysis, physical and logical design, resource planning, coding, testing and implementing business requirements. Expertise with Oracle databases up to Oracle 12c. Experience handling large data-set migrations. Involved in the creation of database objects such as tables, views, sequences, directories, synonyms and collections, and in using bulk techniques and partitioning. Proficient working experience with SQL, PL/SQL, stored procedures, functions, packages, DB triggers and indexes.
* Hands-on experience with the Hadoop ecosystem, including Spark/Scala, HDFS, MapReduce, YARN, Hive, Impala, Oozie, DataStax Cassandra, HBase, MongoDB and VoltDB.
* Worked with various data sources to analyse requirements and migrated the data sources into a data lake on the Hadoop platform to enable analytics operations and customer-insight generation, and to provide the input feed for an accessories-personalization AI platform.
* Designed the framework for data ingestion to create the data lake and bridge tables, generating reports for customer-360 data views.
* Designed and implemented an ETL pipeline for address standardization using Locus API lookups.
* Designed and implemented an ETL workflow framework / listener / orchestrator using Hadoop / PySpark / Scala / Python.
* Good understanding of Cassandra and MongoDB implementations.
* Experienced in creating Spark modules using DataFrames and developing ETL jobs using PySpark, Parquet, ORC and Hive external tables, with data loading into Hive, S3 and RDBMSs.
* Monitored Spark applications using the YARN dashboard.
* SQL / PL/SQL code optimization using Explain Plan, SQLTXPLAIN, DBMS_TRACE, DBMS_PROFILER, Tkprof, optimizer hints, SQL plan baselines, Oracle statistics collection, extended statistics, histograms, AWR reports, ASH reports and OEM.
* Experienced in tuning SQL statements and database tables to enhance load performance in various schemas across databases.
* Experienced in using SQL*Loader to extract and load large volumes of data into the database.
* Hands-on experience with TOAD, SQL*Plus, PL/SQL Developer, and Export and Import utilities and tools.
* Developed wrapper utilities using shell scripts for deployment, database import/export, bulk loading and unloading, bulk data movement, monitoring and object validation.
* Worked with advanced bulk-binding concepts such as BULK COLLECT and FORALL in PL/SQL programming for mass data extraction from large datasets.
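The ETL workflow framework / listener / orchestrator described above can be sketched in plain Python as a minimal dependency-ordered task runner (an illustrative sketch only: the `Task` class, `run_pipeline` function and stage names are hypothetical, not taken from any actual project code):

```python
from collections import deque

class Task:
    """A pipeline step with an action and the names of its upstream dependencies."""
    def __init__(self, name, action, deps=()):
        self.name, self.action, self.deps = name, action, tuple(deps)

def run_pipeline(tasks):
    """Run tasks in dependency order using Kahn's topological sort."""
    by_name = {t.name: t for t in tasks}
    indegree = {t.name: len(t.deps) for t in tasks}
    downstream = {t.name: [] for t in tasks}
    for t in tasks:
        for dep in t.deps:
            downstream[dep].append(t.name)
    ready = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while ready:
        name = ready.popleft()
        by_name[name].action()          # execute the step
        order.append(name)
        for nxt in downstream[name]:    # unlock steps whose deps are done
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise RuntimeError("cycle detected in task graph")
    return order

# Example: ingest -> standardize -> load, mirroring typical ETL stages.
log = []
steps = [
    Task("load", lambda: log.append("load"), deps=["standardize"]),
    Task("ingest", lambda: log.append("ingest")),
    Task("standardize", lambda: log.append("standardize"), deps=["ingest"]),
]
print(run_pipeline(steps))  # ['ingest', 'standardize', 'load']
```

In practice a Spark-based orchestrator would submit jobs instead of calling local functions, but the ordering logic is the same idea.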
* Implemented table partitioning: range partitioning, sub-partitioning and composite partitioning.
* Created high-level design documents, installation documents, release notes and Usage Bible updates.
* Generated DDL scripts. Implemented table/index partitioning strategies per business and data requirements.
* Hands-on experience handling complex data-migration planning and execution, and designing and developing reusable data-transfer frameworks using Oracle SQL, PL/SQL, shell scripting, the Oracle DBMS job scheduler, cron and Autosys.
* Experience in MySQL database development.
* Worked with Oracle / DB2 / MySQL database utilities such as Data Pump, DB2MOVE and DB2CLI on multi-terabyte production databases.
* Hands-on experience modelling databases using ERwin and MS Visio.
* Hands-on experience using Git, Jenkins, Ansible, Docker and Kubernetes.
* Strong programming skills in SQL and PL/SQL units (procedures, functions, materialized views, bulk collections, ref cursors and packages) in software product development.
* Involved in the complete configuration and installation of CI/CD pipelines using tools such as Jenkins, Ansible, Maven and Docker.
* Over 1 year of experience using DevOps for continuous integration / continuous delivery, infrastructure automation, quality engineering and release management.

(Total experience: 14 years)
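The bulk-binding and bulk-loading bullets above come down to batching inserts rather than making one round trip per row. A minimal sketch of that idea in Python's stdlib `sqlite3` (standing in for Oracle; the `subscribers` table, `bulk_load` function and `chunk_size` value are all illustrative assumptions):

```python
import sqlite3

def bulk_load(conn, rows, chunk_size=1000):
    """Insert rows in fixed-size batches, committing per batch --
    the same pattern as PL/SQL BULK COLLECT ... LIMIT with FORALL."""
    cur = conn.cursor()
    for start in range(0, len(rows), chunk_size):
        batch = rows[start:start + chunk_size]
        cur.executemany(
            "INSERT INTO subscribers (id, msisdn) VALUES (?, ?)", batch)
        conn.commit()  # one commit per batch, not per row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribers (id INTEGER PRIMARY KEY, msisdn TEXT)")
bulk_load(conn, [(i, f"555{i:07d}") for i in range(2500)], chunk_size=1000)
print(conn.execute("SELECT COUNT(*) FROM subscribers").fetchone()[0])  # 2500
```

Batching amortizes the statement and commit overhead across many rows, which is why the PL/SQL bulk constructs matter for mass data extraction and loading.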


Work Experience

COMPANY                    POSITION HELD              DATES WORKED
Pricewaterhouse Coopers    Oracle Solution Architect  3/2018 - 6/2018
It                         Payroll Developer          11/2006 - 8/2010

Accomplishments

Highlights:


Job Skills


Keywords
