Kesava Reddy
About
Professional Summary:
• 10+ years of experience building ETL solutions across Snowflake, dbt, Azure Data Factory, Azure Databricks, Spark, Azure Synapse Analytics, AWS, Hive, Python, Oracle Data Integrator 11g, SQL, and Unix.
• Expertise in building and migrating data warehouses on the Snowflake cloud data platform.
• Knowledgeable about best practices for integrating Snowflake with other AWS services, architecting end-to-end data solutions on AWS, and staying current with advancements in Snowflake–AWS integration.
• Expertise in working with Snowflake multi-cluster warehouses, Snowpipe, internal/external stages, stored procedures, zero-copy cloning, Tasks, and Streams.
• Proficient in building and managing data models with dbt, ensuring efficient and accurate data transformations.
• Experienced in implementing incremental models in dbt, optimizing data pipelines for performance and scalability.
• Skilled in writing custom dbt macros and tests, enhancing the functionality and reliability of data workflows.
• Adept at using dbt's built-in tests to validate data integrity, including the unique, not_null, and relationships tests.
• Experience with the Snowflake cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems, including loading data into Snowflake tables.
• Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory.
• Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors, and tasks.
• Proficient in creating and managing dbt documentation, providing clear and comprehensive insights into data models and transformations.
• Implemented an automated delta mechanism for Delta tables in Azure Databricks.
• Created dynamic code to pull incremental data from on-premises sources to Azure Data Lake Storage Gen2.
• Created CI/CD pipelines in Azure DevOps.
• Worked extensively in the Azure cloud with Azure Databricks, ADF, ADLS Gen2, SQL, and Blob Storage.
• Good experience developing applications using Python.
• Experience writing complex SQL scripts using statistical aggregate and analytical functions to support ETL in the Snowflake cloud data warehouse.
• Developed highly scalable, fault-tolerant, maintainable ETL data pipelines to handle vast amounts of data.
• Experience with SDLC project methodologies such as Agile and Waterfall.
• Experience in relational and dimensional data modeling using star and snowflake schemas, normalization, denormalization, and aggregations.
• Good knowledge of Azure and of loading Snowflake tables from Blob Storage using Snowpipe.
• Worked on Task creation for scheduling and automating Snowflake jobs.
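
The dbt incremental pattern mentioned above can be illustrated with a minimal model sketch; the model, source, and column names here are hypothetical, not taken from any actual project:

```sql
-- models/orders_incremental.sql: a minimal dbt incremental model sketch.
-- Table and column names (orders, order_id, updated_at) are placeholders.
{{
    config(
        materialized='incremental',
        unique_key='order_id'
    )
}}

select
    order_id,
    customer_id,
    order_total,
    updated_at
from {{ source('raw', 'orders') }}

{% if is_incremental() %}
  -- On incremental runs, process only rows newer than what is already loaded.
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

On the first run dbt builds the full table; on subsequent runs the `is_incremental()` branch restricts the scan to new or changed rows, which is what makes the pattern scale.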
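
The Snowpipe loading and Task scheduling described above can be sketched in Snowflake DDL; all object names, the S3 URL, and the warehouse are illustrative placeholders:

```sql
-- Illustrative Snowflake DDL: an external stage, a Snowpipe for continuous
-- loading, and a scheduled Task. Names and paths are hypothetical.
create or replace stage raw_stage
  url = 's3://example-bucket/landing/'
  file_format = (type = 'CSV' skip_header = 1);

-- Snowpipe: auto-ingests files as they arrive in the stage location.
create or replace pipe orders_pipe
  auto_ingest = true
as
  copy into raw.orders
  from @raw_stage;

-- Task: runs a transformation on a cron schedule using a named warehouse.
create or replace task refresh_orders_hourly
  warehouse = etl_wh
  schedule = 'USING CRON 0 * * * * UTC'
as
  insert into analytics.orders_hourly
  select date_trunc('hour', updated_at), count(*)
  from raw.orders
  group by 1;

-- Tasks are created suspended and must be resumed to start running.
alter task refresh_orders_hourly resume;
```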