Cranberry Township, PA, USA
Data Engineer IV

Engineer IV, Software – Big Data

Omnicell is the world leader in pharmacy robotics, and we’re expanding beyond inventory management into inventory analytics. The OmniSphere helps hospitals and health systems understand how meds flow through their business, from the loading dock to the nurse’s glove, and then apply clinical expertise and advanced machine learning to uncover opportunities to adjust that flow to improve safety, cost, efficiency, and patient outcomes. And the next step for us is to help busy clinicians act on those opportunities by building efficient, industry-leading workflows. 

To do that, we take terabytes of data from thousands of devices and translate it into simple, actionable steps our clients can take to improve their overall performance. This is achieved through a sleek new microservices architecture primarily composed of Kafka, Spark, PostgreSQL, .NET Core, and Angular, all running in AWS.
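To give a concrete flavor of that stack, here is a minimal Scala sketch of a Spark Structured Streaming job that reads device events from Kafka and lands them in a Delta Lake table. The broker address, topic name, and storage paths are hypothetical placeholders, not Omnicell's actual configuration.

```scala
import org.apache.spark.sql.SparkSession

object DeviceEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("device-event-stream")
      .getOrCreate()

    // Read raw device events from a Kafka topic as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "device-events")             // placeholder topic
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Persist the stream to a Delta table for downstream analytics.
    val query = events.writeStream
      .format("delta")
      .option("checkpointLocation", "/tmp/checkpoints/device-events") // placeholder path
      .outputMode("append")
      .start("/tmp/delta/device-events")                              // placeholder path

    query.awaitTermination()
  }
}
```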

 

Responsibilities:  

- Translate business requirements into effective technology solutions
- Help lead the design, architecture, and development of the Omnicell Data Platform
- Conduct design and code reviews
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
- Analyze and improve efficiency, scalability, and stability of various system resources once deployed
- Provide technical leadership to agile teams (onshore and offshore): mentor junior engineers and new team members, and apply technical expertise to challenging programming and design problems
- Help define the technology roadmap that will support the product development roadmap
- Continue to improve code quality by tracking, reducing, and avoiding technical debt
- Focus on always putting the customer first

 

Required Knowledge and Skills

- Deep development experience with distributed/scalable systems and high-volume transaction applications, including participation in architecting big data projects
- Hands-on programming experience in Scala, Python, and other object-oriented programming languages
- Expertise with big data technologies such as Apache Kafka, Apache Spark, real-time streaming, Structured Streaming, and Delta Lake
- Excellent analytical and problem-solving skills
- Energetic, motivated self-starter who is eager to excel, with excellent interpersonal skills
- Able to balance driving the right architecture against the realities of customer commitments and the need to ship software

Basic Requirements: 

- Education: Bachelor’s degree preferred; relevant experience may be considered in lieu of a degree
- 8+ years of experience in software engineering with a degree; 12+ years of experience in software engineering in lieu of a degree
- Experience developing ETL processing flows with MapReduce technologies such as Spark and Hadoop
- Experience developing with ingestion and clustering frameworks such as Kafka, ZooKeeper, and YARN

Preferred Knowledge and Skills:

- Master’s degree
- Hands-on working experience with cloud infrastructure such as AWS
- Able to scale code and deploy applications in the public cloud using technologies such as AWS Lambda, Docker, and Kubernetes
- Experience with big data ML toolkits such as Mahout, SparkML, or H2O
- Experience working with healthcare-specific data exchange formats, including HL7 and FHIR
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Working knowledge of Databricks, Team Foundation Server, TeamCity, Octopus Deploy, and Datadog

 

Work Conditions:

Corporate office/lab environment. Ability to travel 10% of the time.