What you need
• BS in an Engineering or Science discipline, or equivalent experience
• 7+ years of software/data engineering experience using Java, Scala, and/or
Python, with at least 5 years of experience in a data-focused role
• Experience in data integration (ETL/ELT) development using multiple languages
(e.g., Java, Scala, Python, PySpark, SparkSQL) and modern transformation
methods (e.g., dbt)
• Experience building and maintaining data pipelines supporting a variety of
integration patterns (batch, replication/CDC, event streaming) and data
lakes/warehouses in production environments
• Experience with AWS data services (e.g., Kinesis, Glue, RDS, Athena) and
Snowflake CDW
• Experience working on larger initiatives to build and rationalize large-scale
data environments with a wide variety of data pipelines, possibly including
internal and external partner integrations, is a plus
• Willingness to experiment with and learn new approaches, technologies, and
applications
• Knowledge of and experience with various relational databases, with
demonstrable proficiency in SQL and in supporting analytics use cases and users
• Knowledge of software engineering and agile development best practices
• Excellent written and verbal communication skills