Principal Software Developer
Oracle
We are seeking a highly skilled Principal Software Developer with extensive experience in building large-scale streaming data processing frameworks. In this role, you will design, develop, and optimize high-performance data pipelines that handle massive volumes of real-time data. You will play a critical role in ensuring our systems are scalable, fault-tolerant, and capable of delivering insights at lightning speed.
What you will do:
- Lead the design, architecture, and development of robust streaming data processing frameworks on OCI.
- Design and develop highly efficient data pipelines that ingest, transform, and enrich real-time data streams.
- Collaborate with cross-functional teams to understand data processing requirements and translate them into scalable solutions.
- Optimize and fine-tune the performance of data processing jobs to handle large volumes of data efficiently.
- Mentor and guide junior engineers, providing technical leadership and fostering a culture of excellence and innovation within the team.
- Conduct in-depth code reviews to ensure code quality, best practices, and adherence to software development standards.
- Stay up to date with the latest trends and advancements in distributed systems and big data technologies, and evaluate their applicability to our systems.
- Identify and resolve complex technical challenges related to distributed computing, data partitioning, and fault tolerance.
- Collaborate with infrastructure teams to ensure the smooth deployment and operation of our data processing infrastructure.
- Continuously monitor system performance and identify areas for optimization and improvement.
- Participate in capacity planning and scalability efforts to handle increasing data volumes.

Required Qualifications:
- Proven track record of 10+ years in software engineering, with a focus on building large-scale, real-time data processing frameworks.
- Experience with big data technologies such as Apache Kafka, Flink, Pulsar, Spark Streaming, Hadoop, Hive, Iceberg, and Presto.
- Proficiency in programming languages such as Java, Scala, or Python, and the ability to write high-performance, efficient code.
- Experience with cloud infrastructure such as OCI, AWS, Azure, or GCP, and familiarity with concepts such as Compute, Storage, Identity, Networking, and Databases.
- Solid understanding of distributed computing concepts.
- Experience with containerization technologies (e.g., Docker) and container orchestration (e.g., Kubernetes) is a plus.
- Excellent problem-solving skills and the ability to tackle complex technical challenges with innovative solutions.
- Strong communication skills, both verbal and written, and the ability to work effectively in a collaborative team environment.
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Career Level - IC4