About the role
Metronome is building a high-volume data collection and processing system to support usage-based billing functionality. We ingest tens of billions of events per month representing our customers' product usage (also known as metering data) and use them to calculate revenue on a monthly basis. A key part of our product strategy is to make Metronome a connected system that streams data out to our users' ecosystem, including data warehouses, business systems, and BI tools.
In this role, you will be responsible for maturing and scaling our architecture and laying the foundations for new product offerings based on machine learning and data exploration capabilities. You will manage our Spark- and Kafka-based data infrastructure, which powers several of our critical product capabilities.
This role offers high visibility and impact, as you will enable our users to solve billing challenges using critical product usage and revenue data.
What You Might Work On
Build batch and streaming pipelines capable of processing millions of events per second and generating billions of downstream records within a single job
Design how Metronome’s complex domain model and billing logic can be implemented on top of our data platform
Own the usage of Spark across all of Metronome. Champion existing and net new use cases that can be successfully run on top of Spark as a platform
Shape the direction of Metronome’s data platform as the second specialized data engineering hire at the company
Influence the overall architecture of Metronome. Drive the technical direction around the use of batch, streaming, and real-time compute primitives. Define how key technologies like Spark, Flink, and Druid fit into our company-level strategy.
Impact You'll Have
All of the above work is critical to Metronome's core product
The Metronome team is more well-rounded because you bring your unique personality to work and help create an inclusive, fun environment focused on helping our customers get the most value out of Metronome.
Qualifications
6+ years of industry experience in a technical role
2+ years experience in a backend software engineering role
2+ years experience building data pipelines in Spark or Flink
You write high quality, well tested code to meet the needs of your customers and stakeholders
Excitement to dive deep into our business domain and work with product, design, and our customers to deliver the best solutions possible
We don't filter based on current expertise, so at Metronome you will learn:
AWS (S3, RDS, API Gateway, ECS, Fargate, Lambda, MSK, and more!)
Infrastructure as Code (Terraform, Serverless Framework)
Languages (Python, TypeScript, Java - for working with Kafka)
CI/CD (AWS CodePipeline & CodeDeploy, CircleCI)
Compensation
The estimated base salary range for this role is $150,000 - $217,000. In addition to your base salary, Metronome offers a competitive total rewards package, including but not limited to, market-benched equity, sales incentive pay (for eligible roles), comprehensive health benefits, and other benefits listed below.
The actual base salary will vary based on factors including market value, individual qualifications objectively assessed during the interview process, and previous experience. The listed range above should serve as a guideline and may be modified at any time.
We believe that compensation reflects the expected impact you will have at the company, relative to the market value of your role. We also conduct an annual pay audit to ensure pay is fair, indexed to market value, and that pay takes into account continued performance at Metronome. If you would like to learn more about our philosophy or about why we are all billing nerds, send us a message. We’d love to talk!