Bangalore, Karnataka, India
Hadoop Admin
HARMAN’s engineers and designers are creative, purposeful and agile. As part of this team, you’ll combine your technical expertise with innovative ideas to help drive cutting-edge solutions in the car, enterprise and connected ecosystem. Every day, you will push the boundaries of creative design, and HARMAN is committed to providing you with the opportunities, innovative technologies and resources to build a successful career.

A Career at HARMAN

As a technology leader that is rapidly on the move, HARMAN is filled with people who are focused on making life better. Innovation, inclusivity and teamwork are a part of our DNA. When you add that to the challenges we take on and solve together, you'll discover that at HARMAN you can grow, make a difference and be proud of the work you do every day.

Hadoop Admin

Hadoop system support:

OS loading & configuration of name nodes, data nodes and edge nodes in Hadoop clusterConfigure NFS shares on Data Lake storage nodes & mount across Hadoop cluster nodesInstalling and upgrading Java on Hadoop nodesInstalling Hadoop packagesPerformance management (CPU, Memory & disk performance issues)Kernel parameter tuningDisk management (configuring filesystem, extending filesystem)RAID configuration (redundancy for OS drives)Hardware issues (add and remove drives, memory)Network management (configure IP address, network bonding)Upgrade BIOS & network card driversVendor management (support for hardware & OS)Day to day Hadoop supportMonitoring Hadoop Services (HDFS, Yarn, Hive, Impala, Hue, Solr, Zookeeper, Kafka, Spark,Sqoop, Sentry etc). using Cloudera manager.Monitoring the jobs and informed to Onsite team to failed jobs as well log information.Identify the failed jobs and gathering log data and fixed the issues.Handling the tickets with Hadoop related issues to resolve the with in SLA and priority wise.Responsibility’s:Hadoop cluster setup, installation, configuration of multi-node cluster using Clouderadistributions of Hadoop framework services.Cluster level security implementation using Kerberos authentication and ACL’s as well.Hive databases, tables, column level security implementation using Sentry authorization.

Responsibilities:

• Hadoop cluster setup: installation and configuration of multi-node clusters using the Cloudera distribution of Hadoop framework services.
• Cluster-level security implementation using Kerberos authentication and ACLs.
• Database-, table- and column-level security implementation for Hive using Sentry authorization.
• Hadoop cluster upgrades.
• Commission and decommission nodes in the Hadoop cluster.
• Implement high availability for the NameNode and ResourceManager.
• Integrate AD with the Hadoop cluster.
• Troubleshoot the Hadoop cluster and improve cluster performance.
• Prepare technical documentation.
• Help our support team members create schemas and partitions in Hive and Impala and add or remove files in HDFS directories (a sketch follows this list).
• Migrate Hive databases, tables and HDFS files from the HDPPDEV cluster to the HDPPRD2 cluster.
• Take care of issues such as the following:
  - Integrating AD with SVN.
  - Data content that must be provided by the CLIENT.
  - Third-party dependencies and their response/resolution times.
  - Defects due to hardware failures.
  - Wrong data input already provided by the customer.
• Working experience in supporting customers.
• Production system administration.
• Sound knowledge of the Linux operating system and of services and utilities such as NFS/AutoFS, Samba and NTP.
• Able to handle most basic-to-advanced calls covering user and group management, system start-up, service modification, crontab entries, ACL modifications, file system management, root & OS.
• Experience in server management using the tools provided.
• Capable of installing middleware such as backup client software, Apache and web applications.
• Working knowledge of Linux clusters and SUN Solaris is mandatory.
• Working knowledge of RAID levels (e.g. RAID 0, RAID 1 and RAID 5) is mandatory.
• Working knowledge of performance tuning.
• Interpersonal sensitivity and customer responsiveness, with good spoken and written communication skills and teamwork skills; shift operations are required.
• Should be a good team player.
• Knowledge of UNIX shell scripting.
• Ability to communicate technical content and to manage vendors.
• Able to coordinate installation and configuration of the Linux operating system with hardware vendors.
• Good process knowledge of call logging, tracking and closure procedures; knowledge of ITSM processes.
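As a rough illustration of the schema/partition support task above, the sketch below chains the usual edge-node commands (hdfs dfs and beeline) from Python. The JDBC URL, database, table and path names are invented placeholders, not values from this posting, and the clients are assumed to be on the PATH of a cluster edge node.

    # Minimal sketch: load a file into HDFS and register a matching Hive partition.
    # JDBC URL, database/table names and paths are placeholders.
    import subprocess

    BEELINE_URL = "jdbc:hive2://hiveserver2.example.com:10000/default"

    def run(cmd):
        """Run a command list and fail loudly if it returns non-zero."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def add_partition_and_load(db, table, dt, local_file):
        hdfs_dir = f"/data/{db}/{table}/dt={dt}"          # placeholder warehouse layout
        # Create the target HDFS directory and copy the file in.
        run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir])
        run(["hdfs", "dfs", "-put", "-f", local_file, hdfs_dir])
        # Register the partition in Hive so it becomes queryable.
        ddl = (f"ALTER TABLE {db}.{table} ADD IF NOT EXISTS "
               f"PARTITION (dt='{dt}') LOCATION '{hdfs_dir}'")
        run(["beeline", "-u", BEELINE_URL, "-e", ddl])
        # For Impala, a REFRESH of the table via impala-shell would typically follow.

    if __name__ == "__main__":
        add_partition_and_load("sales_db", "orders", "2024-01-31", "orders.csv")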

• Adhere to SLA response and resolution times and ensure cases are regularly updated.
• Escalate problems in a timely manner as per the defined escalation matrix.
• Experience in server performance analysis and capacity management.
• Experience with at least one enterprise backup management solution.
• Certification requirements can be relaxed provided the candidate has good technical knowledge.
• Experience with cloud technologies such as AWS.
• Experience with NAS, such as ZFS appliances.
• Hadoop system and hardware configuration knowledge.
• Knowledge of setting up and configuring a Nagios server (a sketch follows this list).
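Since the list mentions Nagios, here is a minimal sketch of a Nagios-style check plugin for the NameNode web UI. The hostname is a placeholder and the port assumes Hadoop 3 defaults; status is reported through the standard plugin exit codes (0=OK, 1=WARNING, 2=CRITICAL).

    # Minimal sketch of a Nagios-style check for the NameNode web UI.
    # Host is a placeholder; 9870 is the Hadoop 3 NameNode HTTP default.
    import sys
    import urllib.error
    import urllib.request

    NAMENODE_URL = "http://namenode.example.com:9870/jmx"  # /jmx is the metrics endpoint

    def main():
        try:
            with urllib.request.urlopen(NAMENODE_URL, timeout=10) as resp:
                if resp.status == 200:
                    print("OK - NameNode web UI reachable")
                    sys.exit(0)
                print(f"WARNING - unexpected HTTP status {resp.status}")
                sys.exit(1)
        except (urllib.error.URLError, OSError) as exc:
            print(f"CRITICAL - NameNode web UI unreachable: {exc}")
            sys.exit(2)

    if __name__ == "__main__":
        main()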

HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
