Temple, Texas, USA
BI Data Engineer II

McLane is one of the largest and most stable supply chain services leaders in the United States. We’ve been at the forefront of delivering retail and restaurant solutions for convenience stores, mass merchants, drug stores, and chain restaurants for over 125 years. Our vision is to be an agile, innovative, and unified supply chain partner that delivers a superior customer experience, improves the lives of our teammates and community, and produces best-in-class returns.

Designs, develops, and maintains data architecture and pipelines that adhere to ELT data pipeline principles and business goals.

This is a hybrid position that requires the candidate to report to and work from the office three days a week. Therefore, interested candidates should be within a 50-minute radius of Temple, TX.

BENEFITS:

Day 1 benefits available: medical, dental, and vision insurance; FSA/HSA; and company-paid life insurance. Get paid early. Get paid fast. 401(k) with annual company match. Paid holidays, Responsible Time Off (RTO), college tuition reimbursement, and more.

ESSENTIAL JOB FUNCTIONS / PRINCIPAL ACCOUNTABILITIES:

- Designs, develops, and maintains data architecture and pipelines that adhere to ELT principles and business goals.
- Solves intermediate data problems to deliver insights that help the business achieve its goals.
- Engineers effective features for modeling in close collaboration with data scientists and business stakeholders.
- Assists in the evaluation, implementation, and deployment of emerging tools and processes for analytics data engineering to improve productivity and quality.
- Partners with machine learning engineers, BI, and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
- Fosters a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
- Develops and delivers communication and education plans on analytic data engineering capabilities, standards, and processes.
- Learns about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics as necessary to carry out the role effectively.

MINIMUM SKILLS AND QUALIFICATION REQUIREMENTS:

- Bachelor's degree in computer science, statistics, engineering, or a related field.
- 3-5 years of experience required.
- Experience designing and maintaining data warehouses and/or data lakes with big data technologies such as Spark/Databricks, or distributed databases like Redshift and Snowflake, and experience housing, accessing, and transforming data in a variety of relational databases.
- Experience building data pipelines and deploying/maintaining them following modern data engineering best practices (e.g., dbt, Airflow, Spark, Python OSS data ecosystem).
- Knowledge of software engineering fundamentals and software development tooling (e.g., Git, CI/CD, JIRA) and familiarity with the Linux operating system and the Bash/Z shell.
- Experience with cloud database technologies (e.g., Azure) and developing solutions on cloud computing services and infrastructure in the data and analytics space.
- Basic familiarity with BI tools (e.g., Alteryx, Tableau, Power BI, Looker).
- Expertise in ELT and data analysis, primarily SQL.
- Conceptual knowledge of data and analytics, such as dimensional modeling, reporting tools, data governance, and structured and unstructured data.

WORKING CONDITIONS:

Office Environment.

Candidates may be subject to a background check and drug screen, in accordance with applicable laws.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
