As a Technical Support Engineer, you will provide support to customers by diagnosing, reproducing, and fixing Hadoop-related issues. You will troubleshoot the Hortonworks Data Platform in multiple types of environments and take ownership of problem isolation, resolution, and bug reporting. To be successful in this role, you must be a motivated self-starter, be committed to ongoing self-education, possess strong customer service skills, and have excellent technical problem-solving skills.
Essential Functions
Resolve customer problems via telephone, email, or remote access.
Maintain customer loyalty through integrity and accountability.
Research customer issues in a timely manner and follow up directly with the customer with recommendations and action plans.
Escalate cases to management when customer satisfaction comes into question.
Escalate cases to the engineering team when the problem is beyond the scope of technical support or falls out of the support team’s expertise.
Maintain control and management of the overall resolution for any escalated case, even when cross-functional groups are involved.
Leverage internal technical expertise, including development engineers, knowledge base, and other internal tools to provide the most effective solutions to customer issues.
Create knowledge base content to capture new learning for re-use throughout the company and user base.
Participate in technical communications within the team to share best practices and learn about new technologies and other ecosystem applications.
Participate in the on-call rotation with other Technical Support Engineers.
Actively participate in the Hadoop community to assist with generic support issues.
Learn as much about Hadoop as you can!
Job Requirements
2-8 years of enterprise software support experience.
A strong and enthusiastic commitment to resolving customer problems in a high quality and timely manner.
Support/troubleshooting experience in one or more of the following areas:
o Networking, Hadoop core, HBase, Hive.
o Linux and/or Unix environments.
o Command-line scripting for Linux.
o Enterprise storage, databases, or high-end server solutions.
o Virtualized environments such as ESX, Xen, KVM, or AWS and EC2.
o NAS and/or SAN.
Ability to compile and install Linux applications from source.
Distributed file system experience.
Bachelor's degree in Computer Science or Engineering.
Enthusiastic about Big Data and the Hadoop ecosystem.
Good written and verbal communication skills, with a strong aptitude for learning new technologies and understanding how to apply them in a customer-facing environment.
Excellent interpersonal skills, with the ability to manage customer relationships and stay in control of the situation under all circumstances.
Grace under pressure - must be able to handle difficult customer situations with professionalism.
High energy, high integrity, and a modest demeanor with customers are a must.