Berlin, Germany
DataOps Engineer
Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.
At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company – we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.
As a company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.
Join us in making a positive impact on our customers, employees, and communities.
The Role
Veeva OpenData supports the industry by providing real-time reference data across the complete healthcare ecosystem to support commercial sales execution, compliance, and business analytics. We drive value to our customers through constant innovation, using cloud-based solutions and state-of-the-art technologies to deliver product excellence and customer success. The OpenData Global Data Tools team delivers the tools and data processing pipelines to build the global data core for life sciences in 100+ countries. As a DataOps Engineer on the Global Data Tools team, you will design the data assembly line that allows insights to be derived from data faster and with fewer errors. You will be responsible for creating the tools and processes used to store, manage, and process all compiled data to build the OpenData Reference.

What You'll Do
- Build DataOps tools to automate data workflows and streamline data processing (e.g., reusable software libraries, tools to orchestrate data processing tasks and their dependencies, and components to enable CI/CD integration)
- Adopt solutions and tools that adhere to DataOps best practices
- Continually strive to reduce wasted effort, identify and correct gaps, and improve data development and deployment processes
- Develop ingestion pipelines for raw data
- Put in place the building blocks to deliver the data core for life sciences

Requirements
- Proficiency in Python and PySpark
- 3+ years of experience working with Apache Spark
- Previous experience building tools and libraries to automate and streamline data processing workflows
- Experience running data workflows through DevOps pipelines
- Experience orchestrating data workflows with state-of-the-art tools (e.g., Airflow, AWS Step Functions, or similar from other cloud vendors) and spawning jobs in a cloud-managed Spark cluster (e.g., EMR, Databricks)
- Experience with the Delta Lake architecture and the Delta format

Nice to Have
- Hands-on experience using DevOps tools to deploy and administer clusters on a managed Apache Spark platform in the cloud (e.g., Databricks, AWS EMR)
- Previous experience with the Scala or Kotlin programming languages
- Experience with Amazon Redshift
- Previous experience in the Life Sciences sector

Perks & Benefits
- Benefits package including Restricted Stock Units (RSUs), family health insurance, and contributions to private pension plans
- Annual allocations for continuous learning, development & charitable contributions
- Fitness reimbursement
- Work anywhere

#RemoteGermany
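The orchestration responsibility above (running data processing tasks in dependency order) can be sketched with Python's standard-library graphlib; the task names and functions below are hypothetical illustrations, not Veeva's actual pipeline, and a production system would use a scheduler such as Airflow instead.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical extract/transform/load steps for illustration only.
def extract():
    return ["raw_record_1", "raw_record_2"]

def transform(records):
    return [r.upper() for r in records]

def load(records):
    return len(records)  # pretend this writes downstream and reports a count

# Dependency graph: each task maps to the set of tasks it depends on.
dependencies = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

def run_pipeline():
    """Run tasks in a valid dependency order, passing results downstream."""
    results = {}
    for task in TopologicalSorter(dependencies).static_order():
        if task == "extract":
            results[task] = extract()
        elif task == "transform":
            results[task] = transform(results["extract"])
        elif task == "load":
            results[task] = load(results["transform"])
    return results

print(run_pipeline()["load"])  # prints 2
```

A real orchestrator adds what this sketch omits: retries, scheduling, parallel execution of independent tasks, and observability.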
Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
Veeva is committed to fostering a culture of inclusion and growing a diverse workforce. Diversity makes us stronger. It comes in many forms. Gender, race, ethnicity, religion, politics, sexual orientation, age, disability and life experience shape us all into unique individuals. We value people for the individuals they are and the contributions they can bring to our teams.