Data Engineer-Data Platforms-AWS
IBM
**Introduction**
In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe.
You will collaborate with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you will be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.
**Your role and responsibilities**
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities:
* Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, and Hive, and store it in HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS
* Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and Big Data technologies
* Develop streaming pipelines
* Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka
**Required technical and professional expertise**
* Minimum 4 years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
* Minimum 3 years of experience with Cloud Data Platforms on Azure
* Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB
* Good to excellent SQL skills
* Exposure to streaming solutions and message brokers such as Kafka
**Preferred technical and professional experience**
* Certification in Azure and Databricks, or Cloudera Certified Spark Developer