South San Francisco, California, USA
Senior Data Platform Engineer
About Zipline

Do you want to change the world? Zipline is on a mission to transform the way goods move. Our aim is to solve the world's most urgent and complex access challenges by building, manufacturing, and operating the first instant delivery and logistics system that serves all humans equally, wherever they are. From powering Rwanda's national blood delivery network and Ghana's COVID-19 vaccine distribution, to providing on-demand home delivery for Walmart, to enabling healthcare providers to bring care directly to U.S. homes, we are transforming the way things move for businesses, governments, and consumers. The technology is complex but the idea is simple: a teleportation service that delivers what you need, when you need it. Through our technology, which includes robotics and autonomy, we are decarbonizing delivery, decreasing road congestion, and reducing fossil fuel consumption and air pollution, while providing equitable access to billions of people and building a more resilient global supply chain.

Join Zipline and help us make good on our promise to build an equitable and more resilient global supply chain for billions of people.

About You and The Role

Zipline's Data team is responsible for powering data-driven decision making across the entire organization. To execute on our mission, we need to build a solid data foundation and ensure that every area of the business has access to highly reliable data.

We are hiring a talented and experienced Senior Data Platform Engineer to join our small but growing Data team and play a critical role in designing and executing a robust, forward-looking data strategy for the company. Our team owns the data pipelines and tools that provide secure, reliable, and accessible data, enabling team members to derive actionable insights. Doing our job well means the entire organization can make more informed decisions, innovate faster, and serve our customers better.

In this role, you will work directly with our Data, Engineering, Operations, Go-to-Market, and Finance teams to support the organization's data processing and analytics needs. You will be the internal expert on all things data engineering, empowering your peers with your expertise so that together, we can build a world-class data culture. This is a unique opportunity to directly influence not only our data systems, but also our drones and global operations. The ideal candidate will help us design systems that support the company’s needs today and many years into the future. 

What You'll Do

- Help build, maintain, and scale the data pipelines that bring together data from various internal and external systems into our data warehouse.
- Partner with internal stakeholders to understand analysis needs and consumption patterns.
- Partner with upstream engineering teams to enhance data logging patterns and best practices.
- Participate in architectural decisions and help us plan for the company's data needs as we scale.
- Adopt and evangelize data engineering best practices for data processing, modeling, and lake/warehouse development.
- Advise engineers and other cross-functional partners on how to most efficiently use our data tools.

What You'll Bring

- Advanced knowledge of Python and SQL
- Advanced proficiency with the AWS cloud
- Experience working directly on a data-intensive product and maintaining business-critical systems
- Experience building and maintaining data pipelines and ETL/ELT processes
- Experience with cloud data warehouses such as Snowflake and other OLAP databases
- Experience developing and maintaining customer-facing APIs
- Knowledge of cloud infrastructure best practices and experience with continuous integration and continuous deployment (CI/CD) processes and tools
- Ability to communicate effectively with stakeholders to define requirements and timelines
- Passion for serving as a technical mentor and thought leader, and interest in evangelizing data engineering best practices
- Passion for building a strong data foundation for the company

Nice to haves:

- Experience building streaming applications or pipelines using async messaging services or distributed streaming platforms like Apache Kafka
- Knowledge of Airflow or another orchestration tool
- Experience with Spark or PySpark
- Experience with supply chain, manufacturing, and/or logistics datasets
- Knowledge of data governance best practices and experience working in a highly regulated industry
- Experience with schema design and dimensional data modeling, ideally using tools like dbt

What Else You Need to Know

The starting cash range for this role is $160,000 - $190,000. Please note that this is a target starting cash range for a candidate who meets the minimum qualifications for this role. The final cash pay for this role will depend on a variety of factors, including a specific candidate's experience, qualifications, skills, working location, and projected impact. The total compensation package for this role may also include: equity compensation; discretionary annual or performance bonuses; sales incentives; benefits such as medical, dental, and vision insurance; paid time off; and more.

Zipline is an equal opportunity employer and prohibits discrimination and harassment of any type without regard to race, color, ancestry, national origin, religion or religious creed, mental or physical disability, medical condition, genetic information, sex (including pregnancy, childbirth, and related medical conditions), sexual orientation, gender identity, gender expression, age, marital status, military or veteran status, citizenship, or other characteristics protected by state, federal, or local law or our other policies.

We value diversity at Zipline and welcome applications from those who are traditionally underrepresented in tech. If you like the sound of this position but are not sure you are the perfect fit, please apply!