Responsibilities:
· AWS Data Architecture: Design, implement, and maintain scalable and robust data architectures on AWS.
· Utilize AWS services such as S3, Glue, and Redshift for data storage, processing, and analytics.
· ETL Development with PySpark: Develop and implement ETL processes using PySpark to extract, transform, and load data into AWS data storage solutions.
· Collaborate with cross-functional teams to ensure seamless data flow across different AWS services.
Qualifications:
· Proven experience as a Data Engineer with a focus on AWS and PySpark.
· Strong proficiency in PySpark for ETL and data processing tasks.
· Hands-on experience with AWS data services, including S3, Glue, and Redshift.
· Solid understanding of data modeling, SQL, and database design.