Senior Data Engineer
Abbott Park, Illinois, USA

Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines. Our 114,000 colleagues serve people in more than 160 countries.

Interested in applying your wealth of technical knowledge and experience to an opportunity in the medical field, improving the lives of people with diabetes? The candidate will be responsible for big data engineering, data wrangling, and data analysis in the cloud. The role will also contribute to defining and implementing the organization's big data strategy and will drive the implementation of IT solutions for the business. The candidate will work with other data engineers, data analysts, and data scientists, applying data engineering, data science, and machine learning approaches to solve business problems.
 
As a senior member of the Data Engineering & Analytics team, you will build big data collection and analytics capabilities to uncover customer, product, and operational insights. The candidate should be able to work on a geographically distributed team to develop data pipelines that handle complex data sets quickly and securely, and to operationalize data science solutions. Additionally, they will work in a technology-driven environment utilizing the latest tools and techniques, such as Databricks, Redshift, S3, Lambda, DynamoDB, Spark, and Python.
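
The paragraph above names a Spark-on-AWS stack. Purely as an illustration of the kind of pipeline that stack supports (not Abbott's actual code), here is a minimal PySpark sketch; the bucket paths, column names, and schema are hypothetical placeholders:

```python
# A minimal sketch of a curation pipeline on the Databricks/Spark/S3 stack
# named above. All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("device-readings-curation").getOrCreate()

# Ingest raw JSON device readings landed in S3
raw = spark.read.json("s3://example-raw-bucket/device-readings/")

# Basic cleaning: drop malformed rows and normalize the timestamp
curated = (
    raw.dropna(subset=["device_id", "reading_ts"])
       .withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .withColumn("reading_date", F.to_date("reading_ts"))
)

# Write an analytics-ready, date-partitioned table back to S3
(curated.write.mode("overwrite")
        .partitionBy("reading_date")
        .parquet("s3://example-curated-bucket/device-readings/"))
```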

The candidate should have a passion for software engineering to help shape the direction of the team. Highly sought-after qualities include versatility and a desire to continuously learn, improve, and empower other team members. The candidate will support building scalable, highly available, efficient, and secure software solutions for big data initiatives.

Responsibilities 

Design and implement data pipelines that deliver data to be processed and visualized across a variety of projects and initiatives

Develop and maintain optimal data pipeline architecture by designing and implementing data ingestion solutions on AWS using AWS-native services (a minimal sketch follows this list)

Design and optimize data models on the AWS Cloud using Databricks and AWS data stores such as Redshift, RDS, and S3

Integrate and assemble large, complex data sets that meet a broad range of business requirements 

Read, extract, transform, stage, and load data into selected tools and frameworks as required

Customize and manage integration tools, databases, warehouses, and analytical systems

Process unstructured data into a form suitable for analysis, and assist in analyzing the processed data

Work directly with the technology and engineering teams to integrate data processing and business objectives

Monitor and optimize data performance, uptime, and scale; maintain high standards of code quality and thoughtful design

Create software architecture and design documentation for the supported solutions, along with overall best practices and patterns

Support the team with technical planning, design, and code reviews, including peer code reviews

Provide architecture and technical-knowledge training and support for the solution groups

Develop good working relationships with the other solution teams and groups, such as Engineering, Marketing, Product, Test, and QA

Stay current with emerging trends, making recommendations as needed to help the organization innovate 
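
The ingestion-focused responsibilities above (implementing data ingestion solutions with AWS-native services) could be realized with an event-driven pattern such as an S3-triggered Lambda that loads new objects into Redshift. The following is a minimal sketch only, not a prescribed design; the cluster, database, user, role, and table names are hypothetical placeholders:

```python
# A minimal sketch of S3-triggered ingestion into Redshift via AWS Lambda.
# Cluster, database, user, role, and table names are hypothetical.
from urllib.parse import unquote_plus

import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    """Lambda entry point: COPY each newly landed S3 object into Redshift."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # S3 keys arrive URL-encoded
        redshift_data.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="analytics",
            DbUser="etl_user",
            Sql=(
                f"COPY staging.device_readings "
                f"FROM 's3://{bucket}/{key}' "
                f"IAM_ROLE 'arn:aws:iam::123456789012:role/example-copy-role' "
                f"FORMAT AS JSON 'auto';"
            ),
        )
```

Note that execute_statement on the Redshift Data API is asynchronous; a production pipeline would also poll statement status and handle retries.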

Required Qualifications 

Bachelor's degree in Computer Science, Information Technology, or another relevant field

3 to 8 years of recent experience in software engineering, data engineering, or big data

Ability to work effectively within a team in a fast-paced changing environment 

Knowledge of or direct experience with Databricks and/or Spark

Software development experience, ideally with Python, PySpark, Kafka, or Go, and a willingness to learn new languages and frameworks to meet goals and objectives

Knowledge of strategies for processing large amounts of structured and unstructured data, including integrating data from multiple sources 

Knowledge of data cleaning, wrangling, visualization and reporting 

Ability to explore new alternatives for solving data mining issues, drawing on a combination of industry best practices, data innovations, and experience

Familiarity with databases, BI applications, data quality, and performance tuning

Excellent written, verbal, and listening communication skills

Comfortable working asynchronously with a distributed team 

Preferred Qualifications 

Knowledge of or direct experience with the following AWS services: S3, RDS, Redshift, DynamoDB, EMR, Glue, and Lambda

Experience working in an agile environment 

Practical knowledge of Linux

The base pay for this position is $83,000.00 – $166,000.00. In specific locations, the pay range may vary from the range posted.
