Detailed Job Description:
- 7 years of experience with the Hadoop ecosystem and Big Data technologies.
- Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Scala, Spark, Kafka, Presto. Experience in Scala is a must.
- Experience building stream processing systems using solutions such as Spark Streaming (a minimal illustrative sketch follows this list).
- Experience with other open-source technologies such as Druid, Elasticsearch, and Logstash, and with CI/CD and cloud-based deployments, is a plus.
- Ability to adapt conventional Big Data frameworks and tools to what the use cases require.
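As a minimal, illustrative sketch of the kind of stream processing work referenced above (not part of this job description), the following Scala program reads events from a Kafka topic with Spark Structured Streaming and maintains hourly counts. The broker address, topic name, and windowing choices are hypothetical placeholders.

    // Minimal sketch, assuming a Kafka topic of timestamped records.
    // Broker address ("broker:9092") and topic ("events") are placeholders.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("streaming-sketch")
          .getOrCreate()
        import spark.implicits._

        // Read raw Kafka records; key/value arrive as binary.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")

        // Count records per one-hour event-time window, tolerating
        // up to two hours of late data via the watermark.
        val counts = events
          .withWatermark("timestamp", "2 hours")
          .groupBy(window($"timestamp", "1 hour"))
          .count()

        // Console sink is for demonstration only; a real job would write
        // to a durable sink (e.g. HDFS, Hive, or another Kafka topic).
        counts.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }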
Minimum years of experience: 8 - 10 years
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute:
1. Ensuring data delivery for EAP 2.0 services across streaming, daily, weekly, monthly, and quarterly loads; supporting jobs and data-freshness monitoring for Prod, DR, and ITG environment services (see the data-freshness sketch after this list).
2. Analyzing incident root causes and fixing data issues, using no-code fixes where possible to ensure data delivery; housekeeping; code fixes for incidents, non-ER problem records, and classes of issues to reduce repetitive errors, as well as EAP logic issues.
3. Working on Continuous Service Improvement areas, including performance tuning.
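As a hypothetical illustration of the data-freshness monitoring responsibility above (the table name analytics.daily_fact, column load_ts, and 26-hour SLA are invented placeholders, not details of this engagement), a check of this shape could compare the latest load timestamp against an agreed threshold:

    // Minimal sketch of a data-freshness check for a daily load.
    // All table/column names and the SLA value are assumptions.
    import java.time.{Duration, Instant}
    import org.apache.spark.sql.SparkSession

    object FreshnessCheckSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .appName("freshness-check")
          .enableHiveSupport()
          .getOrCreate()

        // Latest load timestamp recorded in the target table.
        val latest = spark.sql(
          "SELECT MAX(load_ts) AS latest FROM analytics.daily_fact")
          .first()
          .getTimestamp(0)

        val lagHours = Duration.between(latest.toInstant, Instant.now()).toHours
        val slaHours = 26L // assumed SLA for a daily load

        if (lagHours > slaHours) {
          // A production monitor would raise an alert or open an incident.
          println(s"STALE: ${lagHours}h since last load (SLA ${slaHours}h)")
          sys.exit(1)
        } else {
          println(s"FRESH: ${lagHours}h since last load (SLA ${slaHours}h)")
        }
      }
    }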
Interview Process (Is face to face required?): No
Does this position require Visa independent candidates only? No