Job Title: Senior Data Engineer
Job Description:
We are seeking a highly skilled and experienced Senior Data Engineer to join our Data and Analytics team. The ideal candidate will have strong expertise in ETL (Extract, Transform, Load) processes, data modeling, and Databricks, along with excellent SQL and Python skills, and will play a crucial role in managing, optimizing, and enhancing our data infrastructure.
Key Responsibilities:
Product Ownership: Serve as the primary owner of data assets and upgrade projects, ensuring alignment with business goals, user needs, and business analyst requirements.
Stakeholder Collaboration: Work closely with cross-functional teams, including business analysts, data engineering, data science, operations, and business stakeholders, to define and prioritize data and analytics requirements.
Roadmap Development: Collaborate on the detailed product roadmap for data and analytics features, ensuring timely delivery and alignment with project milestones.
Data Integration: Develop and maintain robust, efficient, and scalable ETL processes to extract, transform, and load data from various sources into the data warehouse or data lake.
Quality Assurance: Perform data analysis to identify and resolve data quality issues, inconsistencies, and performance bottlenecks.
Data Governance: Support the development and implementation of data governance and data quality initiatives, and keep the technical documentation of data models and system architecture up to date.
Continuous Improvement: Monitor the performance of data systems, gathering feedback and implementing improvements to enhance user experience and business outcomes. Stay updated on industry trends and emerging technologies in data analytics.
Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, Systems Engineering, Data Science, or a related field.
Experience: Minimum of 6 years of experience as a Senior Data Engineer and at least 3 years using the required technologies.
Dimensional Modeling: Experience with data warehousing concepts, dimensional modeling, and ETL processes.
Technical Skills:
Hands-on experience with Databricks for data engineering, including building and managing data pipelines, and optimizing data processing workflows.
Expertise in the Python programming language and its associated data libraries (e.g., Pandas, NumPy) for data manipulation and analysis.
In-depth understanding of relational database systems (e.g., PostgreSQL, MySQL).
Proficient in SQL, with the ability to write complex queries and optimize their performance.
Familiarity with the Azure cloud platform and experience with its data services (e.g., Azure Data Factory).
Experience with big data technologies (e.g., Hadoop, Spark) and distributed computing frameworks.
Data Governance: Knowledge of data governance and data security principles.
Project Management: Proven track record of managing multiple projects and meeting deadlines in a fast-paced environment.
Analytical Mindset: Exceptional analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.
Communication: Excellent communication and interpersonal skills, capable of collaborating with both technical and non-technical stakeholders.
Leadership: Demonstrated leadership abilities, with experience leading cross-functional teams and driving project success.
Agile Methodology: Familiarity with Agile/Scrum methodologies and experience working in an Agile environment.
English Level: B2 – Upper Intermediate