Role: AWS Data Engineer
Experience Range: 6 to 10 Years
Work Location: Bangalore / Chennai / Kochi / Trivandrum
Key Role:
§ The ideal candidate will have a strong background in optimizing compute performance and running data pipelines on the AWS cloud.
§ You will be responsible for developing ETL processes, deploying dbt models into AWS Redshift, and automating CI/CD pipelines using Jenkins.
§ The goal of this role is to establish the foundational data models in AWS Redshift Serverless and to build the pipelines that take transformations from a Git repository through to deployment (a minimal sketch of such a deploy step follows).
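For illustration only, since this posting does not prescribe a specific implementation: the Python sketch below shows the kind of deploy step a Jenkins stage might invoke, shelling out to the dbt CLI against a Redshift target. The "prod" target name is an assumption and would be defined in the project's profiles.yml.

```python
"""Hypothetical CI deploy step: run and test dbt models from a checked-out repo.

Assumes dbt-redshift is installed and a 'prod' target exists in profiles.yml
(both are assumptions, not details taken from this posting).
"""
import subprocess
import sys

def run(cmd: list[str]) -> None:
    # Echo the command, then fail the build immediately if the step fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    try:
        run(["dbt", "deps"])                      # install dbt package dependencies
        run(["dbt", "run", "--target", "prod"])   # materialize models in Redshift
        run(["dbt", "test", "--target", "prod"])  # validate the deployed models
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```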
Responsibilities:
§ Design and implement scalable data solutions using AWS services (Lambda, Kinesis, S3, Glue, Redshift, RDS, SageMaker).
§ Develop and optimize ETL processes to reduce data processing time and storage costs.
§ Set up and manage Jenkins CI/CD pipelines for automated deployment of dbt models.
§ Write and maintain scripts in Python and SQL for data integration and automation (a minimal example is sketched after this list).
§ Develop data models, dictionaries, and data warehouses to improve data accuracy and retrieval.
§ Collaborate with cross-functional teams and mentor junior engineers.
§ Analyze and troubleshoot data quality issues and monitor performance of data integration processes.
§ Design secure AWS environments using CloudFormation templates, in collaboration with the UST IAM team.
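As a minimal illustration of the Python/SQL automation scripts mentioned above (not a prescribed implementation): the sketch below runs a SQL statement against Redshift Serverless through the boto3 Redshift Data API. The workgroup and database names are hypothetical placeholders.

```python
"""Hypothetical data-integration helper: run SQL on Redshift Serverless via the
asynchronous Redshift Data API (workgroup/database names are made up)."""
import time

import boto3

client = boto3.client("redshift-data")

def execute(sql: str, workgroup: str = "analytics-wg", database: str = "dev") -> list:
    # Submit the statement; the Data API runs it asynchronously.
    stmt = client.execute_statement(WorkgroupName=workgroup, Database=database, Sql=sql)
    # Poll until the statement reaches a terminal state.
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(desc.get("Error", desc["Status"]))
    # Only statements that return rows have a result set to fetch.
    if desc.get("HasResultSet"):
        return client.get_statement_result(Id=stmt["Id"])["Records"]
    return []

if __name__ == "__main__":
    print(execute("SELECT current_date"))
```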
Qualifications:
§ Proficiency in AWS services and cloud infrastructure management.
§ Strong programming and scripting skills (Python, SQL).
§ Experience with CI/CD tools, particularly Jenkins.
§ Excellent problem-solving and debugging skills.
§ Proven track record in optimizing data pipelines and reducing operational costs.
§ Strong collaboration and communication skills.
Preferred Skills:
§ Experience with dbt and deploying models into AWS Redshift.
§ Knowledge of data warehousing concepts and best practices.
§ Experience with Airflow or other orchestration tools.
§ Familiarity with version control systems (e.g., Git).