Data Engineer-Data Platforms
IBM
**Introduction**
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio; including IBM Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career.
**Your role and responsibilities**
As a Data Engineer at IBM, you'll play a vital role in application design and development, providing regular support and guidance to project teams on complex coding, issue resolution, and execution.
Your primary responsibilities include:
* Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
* Strive for continuous improvement by testing the built solution and working within an agile framework.
* Discover and implement the latest technology trends to build creative solutions and maximize business value.
**Required technical and professional expertise**
* Experience with Apache Spark (PySpark): In-depth knowledge of Spark’s architecture, core APIs, and PySpark for distributed data processing.
* Big Data Technologies: Familiarity with Hadoop, HDFS, Kafka, and other big data tools.
* Data Engineering Skills: Strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
* Strong Proficiency in Python: Expertise in Python programming with a focus on data processing and manipulation.
* Data Processing Frameworks: Knowledge of data processing libraries such as Pandas and NumPy.
* SQL Proficiency: Experience writing optimized SQL queries for large-scale data analysis and transformation.
* Cloud Platforms: Experience working with cloud platforms like AWS, Azure, or GCP, including their cloud storage systems.
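To illustrate the ETL and SQL skills the list above describes, here is a minimal sketch of an extract-transform-load pipeline. It uses Python's built-in `sqlite3` as a stand-in for a warehouse or a distributed engine like Spark; the table name, column names, and sample data are purely illustrative, not part of any IBM project.

```python
import sqlite3

def run_etl(raw_rows):
    """Minimal ETL sketch: clean raw records, load them into a
    SQL table, and run an aggregate query over the result."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: drop malformed rows and normalize region names.
    cleaned = [
        (region.strip().upper(), float(amount))
        for region, amount in raw_rows
        if amount is not None
    ]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    # Aggregate with SQL, as one would against a data warehouse.
    return dict(
        conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"
        ).fetchall()
    )

raw = [("east ", 10.0), ("WEST", 5.5), ("east", 2.5), ("west", None)]
print(run_etl(raw))  # {'EAST': 12.5, 'WEST': 5.5}
```

In a production PySpark pipeline, the same extract-transform-aggregate shape would be expressed with DataFrame operations running across a cluster rather than a single in-memory database.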
**Preferred technical and professional experience**
* Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
* Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering.
* Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.