Insulet started in 2000 with an idea and a mission to enable our customers to enjoy simplicity, freedom and healthier lives through the use of our Omnipod® product platform. Over the last two decades we have improved the lives of hundreds of thousands of patients by using innovative technology that is wearable, waterproof, and lifestyle-accommodating.
We are looking for highly motivated, performance-driven individuals to be a part of our expanding team. We achieve this by hiring amazing people, guided by shared values, who exceed customer expectations. Our continued success depends on it!
Position Overview
Insulet Corporation, maker of the Omnipod®, is the leader in tubeless insulin pump technology. We are seeking a highly skilled Senior Data Operations Engineer to join our team. This role involves designing and implementing robust data architectures using the Databricks, AWS, and Azure platforms. The ideal candidate will have extensive experience in big data technologies, cloud platforms, and data engineering. We are looking for a high-performing individual who is passionate about data to provide data pipeline support, operations, and engineering across multiple data domains and data platforms. Initially, you will proactively identify and resolve data pipeline issues to ensure high-quality, on-time data for our clients. In addition, you will optimize pipeline performance and develop an understanding of how our clients use their data.
We are a fast-growing company that provides an energetic work environment and tremendous career growth opportunities.
Key Responsibilities
Education & Experience
- Bachelor’s degree in Mathematics, Computer Science, Electrical/Computer Engineering, or a closely related STEM field is required. (Four years of relevant work experience may be considered in lieu of the educational requirement.)
- 5+ years’ experience in hands-on data operations, including data pipeline monitoring, support, and engineering.
- Experience in data Quality Assurance, Control, and Lineage for large datasets in relational/non-relational databases is strongly preferred.
- Experience managing robust ETL/ELT pipelines for large real-world datasets, which may include messy data, unpredictable schema changes, and/or incorrect data types, is strongly preferred.
- Proficiency in Databricks, AWS and/or Azure, or similar cloud platforms.
- Experience monitoring and supporting data pipelines that are fast, scalable, reliable, and accurate.
- Ability to understand clients' data requirements in order to meet their data operations needs.
- Strong analytical and problem-solving skills, with keen attention to detail.
- Effective communication and collaboration abilities.
- Experience with big data technologies such as Apache Spark, Hadoop, and Kafka.
- Expertise in data modeling, ETL processes, and data warehousing.
- Experience with both batch and streaming data processing.
- Experience implementing and maintaining Business Intelligence tools linked to an external data warehouse or relational/non-relational databases is required.
Skills/Competencies
- Excellent design and development experience with SQL and NoSQL databases, including OLTP and OLAP databases.
- Knowledge of non-relational databases (MongoDB) is a plus.
- Demonstrated knowledge of managing large datasets in the cloud (Azure, SQL, AWS, etc.) is required.
- Knowledge of ETL and workflow tools (Databricks, Azure Data Factory, AWS Glue, etc.) is a plus.
- Demonstrated knowledge of building, maintaining, and scaling cloud architectures (Azure, AWS, etc.), specifically cloud data tools that leverage Spark, is preferred.
- Experienced coding ability in Python and SQL.
- Demonstrated familiarity with common input data formats (e.g., CSV, XML, JSON).
- Demonstrated knowledge of database and dataset validation best practices.
- Demonstrated knowledge of software engineering principles and practices.
- Ability to communicate effectively and to document objectives and procedures.

NOTE: This position is eligible for 100% remote working arrangements (may work from home/virtually 100%; may also work hybrid on-site/virtual as desired). #LI-Remote