Mission:
In the Go-to-Market Analytics Centre of Excellence, our mission is to deliver impact by building machine learning products that optimize pricing and marketing investments and provide guidance to our sales organization. These products are of high value to the company, with the ambition to inform pricing decisions and marketing investments for all HP products, globally and across all distribution channels, using advanced analytics, machine learning, and AI.
To get there, we are looking for a Data Engineer to apply engineering best practices and turn raw data into structured, scalable, high-quality, and compliant fuel for our AI-powered products and solutions.
Please note that this is a fixed-term role with a 12-month contract. Extension to a permanent position is possible but not guaranteed.
What makes us stand out:
Best-in-class technologies.
Diverse learning opportunities.
Super collaborative environment and international experience.
What you will be doing:
Co-own, as part of a cross-functional team, all stages of design and development for complex, secure, and performant data systems and solutions, including design, analysis, coding, testing, and integration of structured and unstructured data.
Work with business stakeholders from diverse backgrounds and own team-level objectives.
Own operational excellence objectives targeting reliability, quality, and release cadence, while perfecting operational economies of scale through extreme automation.
Ensure compliance of data architecture, systems, and products with data policies (including privacy), architecture, security, and quality guidelines and standards.
Co-design and lead architecture evolution through innovation and adoption of new technologies.
What we are looking for:
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent.
1-2 years of experience in relevant roles.
Hands-on expertise with data engineering systems (Databricks, SQL engines), languages (Python, SQL), and frameworks (Apache Spark) to explore, mine, transform, and cleanse data.
Exposure to designing, building, and operating cloud-native products and solutions, with experience in at least one major cloud vendor (e.g. AWS).
Solid foundation in the concepts and principles of how data-intensive systems work under the hood (e.g. consistency, sharding, horizontal scaling).
Experience with modern software lifecycle management tools, including version control (git), test pyramids and quality test automation frameworks, and continuous integration and deployment processes and tools.
Experience delivering software with agile project management tools and practices.
A move-fast, can-do, execution-biased attitude.
#LI-POST