Bangalore
Azure/AWS with Databricks & PySpark - Lead

We are seeking a highly skilled and experienced Data Engineer with expertise in Azure or AWS cloud platforms, Databricks, and PySpark. The ideal candidate will have a strong background in building and supporting data pipelines and architectures, with an emphasis on ensuring data quality and driving end-to-end data solutions. This role will work closely with cross-functional teams to support BI and analytical reporting needs on data lake and data warehouse platforms.

Key Responsibilities:

- Perform source-to-target data analysis and data mapping across cloud platforms.
- Collaborate with administrators, developers, data engineers, and analysts to ensure seamless feature delivery.
- Conduct requirements analysis, coordinating closely with project managers and development teams throughout the project lifecycle.
- Work with Scrum Masters to maintain product backlogs and assist with sprint planning.
- Design, build, and test data pipelines for both real-time and batch processing to meet project needs.
- Support the development and evolution of the Enterprise Data Platform (EDP) architecture, contributing to the data platform roadmap and architecture initiatives.
- Ensure data quality and integrity across Azure/AWS cloud data environments, with a focus on BI and analytical reporting use cases.
- Identify potential risks early, communicate them to stakeholders, and help develop mitigation plans.
- Build capacity to support data engineering activities as required.
- Follow agile methodologies and DevOps practices to deliver product features and solutions efficiently.
- Adhere to prescribed development processes and best practices.

Skills and Qualifications:

- 6+ years of experience in data engineering or a related field, delivering data solutions in cloud environments.
- Proven experience in Azure or AWS cloud environments, with expertise in Databricks and PySpark.
- Hands-on experience with cloud BI solutions and familiarity with visualization tools (e.g., Power BI or similar).
- Strong knowledge of agile methodologies and experience with DevOps, DataOps, or DevSecOps practices.
- Proficiency in using PySpark on Databricks to build scalable data processing solutions.
- Excellent written, verbal, and interpersonal communication skills.
- Strong analytical skills and experience documenting business requirements.
- Ability to work effectively with cross-functional teams across regions and time zones, using multiple communication methods (email, MS Teams, meetings).
- Exceptional prioritization and problem-solving skills.

Skills

Azure/AWS, PySpark, Databricks

 
