Data Engineer - Data Platforms - Google
IBM
**Introduction**
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing.
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
"Must Have:
* End to End functional knowledge of the data pipeline/transformation, implementation that the candidate has done, should understand the purpose/KPIs for which data transformation was done.
* Expert in SQL - can do data analysis and investigation using Sql queries
* Implementation Knowledge Advance Sql functions like - Regular Expressions, Aggregation, Pivoting, Ranking, Deduplication etc.
* BigQuery and BigQuery Transformation (using Stored Procedures) Data modelling concepts - Star & Snowflake schemas, Fact & Dimension table, Joins, Cardinality etc
* GCP Services related to data pipelines like - Workflow, Cloud Composer, Cloud Schedular, Cloud Storage etc
* Understanding of CI/CD & related tools - Git & Terraform Other GCP Services like - Dataflow, Cloud Build, Pub/Sub, Cloud Functions, Cloud Run, Cloud Workstation etc
* BigQuery Performance tuning, Python based API development exp Spark development exp
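For candidates brushing up on these topics, here is a minimal sketch of the deduplication pattern named above, run through the BigQuery Python client. It assumes the `google-cloud-bigquery` package; the project, dataset, table, and column names are hypothetical.

```python
# Minimal sketch: SQL deduplication via ROW_NUMBER(), executed with the
# BigQuery Python client. All project/table/column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

# Keep only the newest row per business key -- one of the "advanced SQL"
# patterns (window functions, ranking, deduplication) listed above.
DEDUP_SQL = """
SELECT * EXCEPT(rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY customer_id      -- hypothetical business key
      ORDER BY updated_at DESC      -- newest row wins
    ) AS rn
  FROM `my-project.analytics.customers_raw`
)
WHERE rn = 1
"""

for row in client.query(DEDUP_SQL).result():  # blocks until the job finishes
    print(dict(row))
```

The same `PARTITION BY`/`ORDER BY` window machinery underpins the ranking and pivoting patterns the list mentions.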
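In the same spirit, here is a minimal Cloud Composer (Airflow 2.x) DAG sketch showing how these services commonly fit together: a scheduled GCS-to-BigQuery load followed by a stored-procedure transformation. The DAG id, bucket, tables, and procedure name are hypothetical.

```python
# Minimal sketch: Cloud Composer (Airflow 2.x) DAG chaining a GCS-to-BigQuery
# load to a BigQuery stored-procedure call. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_sales_pipeline",       # hypothetical pipeline name
    schedule_interval="0 6 * * *",       # daily at 06:00 UTC
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Land raw CSV files from Cloud Storage into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="my-landing-bucket",      # hypothetical bucket
        source_objects=["sales/*.csv"],
        destination_project_dataset_table="my-project.staging.sales_raw",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    # Run the transformation as a BigQuery stored procedure.
    transform = BigQueryInsertJobOperator(
        task_id="transform",
        configuration={
            "query": {
                "query": "CALL `my-project.analytics.sp_build_sales_fact`()",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```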
**Required technical and professional expertise**
* Develop/convert database objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another (e.g., Hadoop to GCP)
* Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.); a minimal incremental-load sketch follows this list
* Expose data as APIs (see the API sketch after this list)
* Participate in the modernization roadmap journey
* Analyze discovery and analysis outcomes
* Lead discovery and analysis workshops/playbacks
* Identify application dependencies and source/target database incompatibilities
* Analyze the non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.)
* Prepare the effort estimates, WBS, staffing plan, RACI, RAID, etc.
* Lead the team in adopting the right tools for the various migration and modernization methods
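As one concrete illustration of a replication mechanism, here is a minimal sketch of a watermark-based incremental load, a lightweight alternative to log-based CDC, implemented as a parameterized BigQuery `MERGE` via the Python client. Table names, keys, and the watermark value are hypothetical.

```python
# Minimal sketch: watermark-based incremental replication with a BigQuery
# MERGE. Only rows changed since the last successful run are applied.
# All table/column names and the watermark value are hypothetical.
from datetime import datetime, timezone

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

MERGE_SQL = """
MERGE `my-project.dw.orders` AS tgt
USING (
  SELECT *
  FROM `my-project.staging.orders_delta`
  WHERE updated_at > @watermark        -- rows changed since the last run
) AS src
ON tgt.order_id = src.order_id
WHEN MATCHED THEN
  UPDATE SET tgt.status = src.status, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
  INSERT ROW
"""

job = client.query(
    MERGE_SQL,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(
                "watermark",
                "TIMESTAMP",
                datetime(2024, 1, 1, tzinfo=timezone.utc),  # last run's high-water mark
            )
        ]
    ),
)
job.result()  # raises if the MERGE fails
```

In practice the watermark would be persisted (for example in a control table) and advanced after each successful run.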
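Finally, a minimal sketch of "expose data as APIs": a FastAPI endpoint serving a parameterized BigQuery query. The route, table, and column names are hypothetical.

```python
# Minimal sketch: exposing BigQuery data through a FastAPI endpoint.
# The route, table, and column names are hypothetical.
from fastapi import FastAPI
from google.cloud import bigquery

app = FastAPI()
client = bigquery.Client(project="my-project")  # hypothetical project id

@app.get("/orders/{customer_id}")
def get_orders(customer_id: str):
    # Parameterize the query rather than interpolating strings.
    job = client.query(
        "SELECT order_id, status, updated_at "
        "FROM `my-project.dw.orders` WHERE customer_id = @cid",
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("cid", "STRING", customer_id)
            ]
        ),
    )
    # Serialize each BigQuery Row to a plain dict for the JSON response.
    return [dict(row) for row in job.result()]
```

A service like this is typically containerized and deployed on Cloud Run, which the posting also lists.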
**Preferred technical and professional experience**
* You thrive on teamwork and have excellent verbal and written communication skills.
* Ability to communicate with internal and external clients to understand and define business needs, and to propose analytical solutions.
* Ability to communicate results to both technical and non-technical audiences.