Hadoop Admin - Location: Santa Clara, CA


Posted: 2015-10-08 11:46:53
Job Type: Full Time only
Budget: $100,000 - $200,000

Title: Hadoop Admin
Location: Santa Clara, CA
Duration: Long Term

Position Description:
Our client, the leading provider of online marketing software and services to the restaurant industry, is seeking an experienced Hadoop Infrastructure Administrator to join their outstanding software development team in Santa Clara, CA. The person will be responsible for the implementation and ongoing maintenance of the Hadoop big data infrastructure. The candidate will work within a team and will interact daily with Engineers, Data Analysts, BI Engineers, and the Services team. The Hadoop Administrator will need strong analytical and organizational skills and the strictest attention to detail. The successful candidate will adhere to established standards and provide input to develop new standards as needed. The Hadoop Administrator must be able to work independently with minimal supervision, have strong communication skills, be self-driven, and be an effective team player.

Job Responsibilities:
* Implementation and ongoing administration of Hadoop infrastructure.
* Cluster maintenance, including administration, monitoring, tuning, and troubleshooting.
* Design, implement, and maintain security; forecast and plan data capacity and node growth.
* Provide hardware architectural guidance, plan and estimate cluster capacity, and create roadmaps for Hadoop cluster deployment.
* Work closely with the engineering, infrastructure, network, database, and business intelligence teams to ensure availability.

Candidate Profile:
* BS in Computer Science or a related area.
* 7-10 years of system administration, networking, and virtualization experience.
* 3+ years of experience in Hadoop and NoSQL infrastructure administration, preferably with the MapR distribution.
* Experience extracting, loading, and transforming data in and out of Hadoop, primarily using Hive, Sqoop, and distcp.
* Proficiency with shell scripts and administration tools such as Ganglia.
* Experience with deployment tools such as Puppet, Chef, and Ansible.
* Familiarity with YARN, Hive, Pig, Sqoop, HBase, Spark, and Kafka.
* On-call production support experience.
* Proficiency with agile or lean development practices.
* Excellent technical and organizational skills.
* Excellent written and verbal communication skills.
* Ability to work independently with minimal supervision.

Top 4 skill sets / technologies in the ideal candidate:
1. Cloud infrastructure management, SaaS
2. Hadoop infrastructure administration
3. Experience working in an Agile environment
4. Hadoop/HBase/Hive/Spark/Tableau/Kafka

Technologies that we use include:
* Java
* Hadoop/MapReduce
* Flume
* Spark
* Kafka
* HBase
* Drill
* MemSQL
* Pig
* Hive
* Talend
* Tableau Integration
* ETL
Key Skills: