We are seeking a highly motivated Lead AWS Data Engineer who thrives on building efficient, scalable data pipelines on AWS. Drawing on your technical expertise in AWS ETL services and your ability to lead and mentor others, you will be responsible for designing, developing, and managing large-scale data pipelines while ensuring data quality, security, and performance.

Perficient is always looking for the best and brightest talent, and we need you! We’re a fast-growing global digital consulting leader transforming the world’s largest enterprises and biggest brands. You’ll work with the latest technologies, expand your skills, and become part of our global community of talented, diverse, and knowledgeable colleagues.
RESPONSIBILITIES
• Design, develop, and implement secure and scalable data pipelines on AWS using services such as Glue, Lambda, Step Functions, and Kinesis (see the sketch after this list).
• Lead a team of ETL engineers in designing and implementing data integration solutions.
• Optimize data pipelines for performance, scalability, and cost-effectiveness.
• Develop and maintain documentation for data pipelines, ensuring transparency and knowledge transfer.
• Implement data quality checks and monitoring to ensure data integrity throughout the ETL process.
• Collaborate with data analysts, data scientists, and business stakeholders to identify data needs and define ETL requirements.
• Secure data access using AWS Identity and Access Management (IAM) policies.
• Automate data pipeline deployment and management using Infrastructure as Code (IaC) principles with Terraform.
• Stay up to date on the latest advancements in AWS data services and ETL best practices.
• Troubleshoot and resolve complex data pipeline issues.
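To give candidates a concrete feel for the orchestration work described above, here is a minimal, illustrative sketch of a Lambda handler that starts a Glue job and hands the run off to a Step Functions state machine for monitoring. All resource names (the job name, state machine ARN, and S3 path) are hypothetical placeholders, not actual project resources:

```python
# Minimal illustrative sketch: trigger an AWS Glue job from Lambda and
# hand the run id to a Step Functions state machine for monitoring.
# All names below (job, ARN, bucket) are hypothetical placeholders.
import json
import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

GLUE_JOB_NAME = "nightly-sales-etl"  # hypothetical Glue job name
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:etl-monitor"  # placeholder
)

def lambda_handler(event, context):
    """Start the Glue job, then launch a state machine execution that
    tracks the run until it succeeds or fails."""
    run = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={
            # Glue job arguments must be string-to-string pairs.
            "--source_path": event.get("source_path", "s3://example-bucket/raw/"),
        },
    )
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"glue_run_id": run["JobRunId"]}),
    )
    return {"glue_run_id": run["JobRunId"]}
```

In practice, resources like these would be provisioned and versioned through Terraform rather than created by hand, in line with the Infrastructure as Code responsibility above.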
QUALIFICATIONS
• 5+ years of experience designing and implementing data pipelines on AWS.
• Proven experience with AWS ETL services such as Glue, Lambda, Step Functions, and Kinesis.
• Strong understanding of data warehousing concepts and data modeling principles.
• Experience with SQL and scripting languages such as Python or Bash for data manipulation and automation.
• Experience with data quality tools and techniques.
• Excellent communication, collaboration, leadership, and problem-solving skills.
• Ability to mentor and guide junior team members.
ADDITIONAL SKILLS
• Experience with cloud-native databases like Amazon DynamoDB and Amazon Aurora.
• Experience with data warehousing platforms like Redshift or Snowflake.
• Experience with containerization technologies like Docker and Kubernetes.
• Experience with CI/CD methodologies for data pipelines.
PREFERRED EDUCATION AND CERTIFICATION
• Bachelor’s degree in computer science or a related field.
• AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate.