Remote, Guatemala
Data Engineer Guatemala

At EZCORP we are a growing team focused on reshaping the pawn industry as we know it today. We believe that our platform-enabled lending and e-commerce solutions will revolutionize our ability to attract, engage, and serve our customers across the United States, Mexico, and Latin America.

Join us now for an opportunity to be a part of a team that wants to provide access to short-term cash for every person – everywhere!

The Company:

Founded in Austin in 1989, EZCORP has grown into a leading provider of pawn loans in the United States, Mexico, and Latin America. We are dedicated to satisfying the short-term cash needs of consumers who are both cash- and credit-constrained, and to providing an industry-leading customer experience.

What’s in it for you:

A ground-floor opportunity with EZCORP, a company with a start-up, purpose-driven mentality where innovative and agile problem solving are part of our DNA, along with competitive compensation and benefits.

Address:

Guatemala

JOB SUMMARY

The Data Engineer will focus on developing and optimizing data processing systems, implementing ETL processes, and ensuring the accuracy and consistency of data across various platforms. The role involves working with large datasets and using tools like Azure Data Factory, Azure Databricks, and Spark to create scalable data pipelines. The Data Engineer will support the Data Architect by implementing data models and contributing to data governance practices.

ESSENTIAL DUTIES & RESPONSIBILITIES:

Develop, maintain, and optimize data processing systems and ETL pipelines using tools such as Azure Data Factory and Azure Databricks.
Collaborate with the Data Architect to implement data models and architectures that support scalable and reliable data flows.
Ensure data quality and integrity through unit tests with Pytest and component and integration tests with Behave.
Work with cross-functional teams to integrate data from various sources, utilizing PySpark processes executed in Databricks and orchestrated by Azure Data Factory.
Manage infrastructure components in the Azure Cloud, including role-based access control and Infrastructure as Code (IaC) practices.
Contribute to continuous integration and continuous deployment (CI/CD) processes using Azure DevOps and related tools.
Perform data analysis and processing using SQL, Python, and Spark to support business intelligence and reporting needs.
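
To give a flavor of the day-to-day work described above, here is a minimal, hypothetical sketch: a small PySpark transformation of the kind that might run in a Databricks job, paired with a Pytest unit test against a local Spark session. The function name, column names, and test fixture are illustrative assumptions, not EZCORP code or any specific pipeline used by the team.

# Illustrative sketch only: a small PySpark transformation plus a Pytest unit test.
# All names (clean_transactions, the columns, the local SparkSession fixture) are
# hypothetical examples, not EZCORP code.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F
import pytest


def clean_transactions(df: DataFrame) -> DataFrame:
    """Drop rows with missing amounts and standardize the currency code."""
    return (
        df.filter(F.col("amount").isNotNull())
          .withColumn("currency", F.upper(F.col("currency")))
    )


@pytest.fixture(scope="module")
def spark():
    # Local Spark session for unit tests; on Databricks a session is provided by the platform.
    session = (
        SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    )
    yield session
    session.stop()


def test_clean_transactions_drops_null_amounts(spark):
    source = spark.createDataFrame(
        [(1, 100.0, "usd"), (2, None, "gtq")],
        ["id", "amount", "currency"],
    )
    result = clean_transactions(source)
    rows = result.collect()
    assert len(rows) == 1
    assert rows[0]["currency"] == "USD"

In a setup like the one the posting describes, a transformation of this shape would typically be packaged and executed in Databricks, triggered by an Azure Data Factory pipeline, with the Pytest suite running in an Azure DevOps CI/CD stage before deployment.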

EDUCATION & EXPERIENCE:

5+ years of experience in data engineering or a related field.
Proficiency in SQL, Python, and Spark for data processing and analysis.
Extensive experience with Azure Data Factory and Azure Databricks for ETL and data pipeline development.