Hyderabad, Telangana, India
Software Engineer II - PySpark Developer

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a PySpark Developer Software Engineer II at JPMorgan Chase within the Commercial & Investment Bank Payments Technology team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities


- Executes standard software solutions, design, development, and technical troubleshooting
- Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
- Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
- Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
- Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
- Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
- Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
- Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills


- Formal training or certification on software engineering concepts and 2+ years of applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proven experience as a Data Engineer (at least 3 years), with a focus on PySpark and big data technologies
- Proficiency in Python and PySpark for data processing and analysis
- Experience with big data technologies on the cloud, such as AWS or other cloud services
- Strong understanding of SQL and experience with relational databases
- Knowledge of data warehousing concepts and ETL processes
- Experience monitoring data pipelines for performance and reliability, and troubleshooting issues as they arise
- Ability to optimize PySpark jobs for performance and scalability, including tuning Spark configurations and resource allocation
- Experience across the whole Software Development Life Cycle
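The Spark-tuning expectation above usually comes down to a handful of configuration properties. As a rough illustration only (the values below are assumptions for a hypothetical mid-sized batch job, not figures from this posting), resource allocation and shuffle behavior are typically controlled with settings like:

```properties
# Illustrative spark-defaults.conf settings; values are assumptions,
# sized in practice to the cluster and data volume.

# Resource allocation: executor count, cores, and memory per executor
spark.executor.instances      8
spark.executor.cores          4
spark.executor.memory         8g

# Shuffle tuning: partition count used by joins and aggregations
spark.sql.shuffle.partitions  200

# Adaptive Query Execution: coalesce partitions and mitigate skew at runtime
spark.sql.adaptive.enabled    true
```

The same properties can also be supplied per job via `spark-submit --conf`, which keeps tuning decisions local to the workload rather than cluster-wide.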

Preferred qualifications, capabilities, and skills


- Familiarity with modern front-end technologies
- Exposure to cloud technologies
- PySpark, Databricks, SQL, AWS Cloud, Glue