Big Data Solution Architect | Austin, TX


Posted: 2017-07-06 17:39:14
Job Type: Contract

Duration: 6 months+

Rate: Open

Job Description

Position: Senior Big Data Solution Architect

Location: Austin, Texas

Job Responsibilities:

  • Document and understand business requirements, environment dependencies, and integration points
  • Develop end-to-end architecture designs for big data solutions based on a variety of business use cases
  • Lead end-to-end Hadoop implementations in a large enterprise environment, integrating with multiple legacy applications built on heterogeneous technologies (Microsoft, Java, PowerBuilder, Oracle, SQL Server, mainframe, GIS (point cloud), sensors, etc.)
  • Implement a Hadoop ecosystem that enables a big data storage repository, data warehouse and data mart capabilities, business intelligence (BI), and big data analytics
  • Present the architecture design to the various stakeholders (customer, server, network, security, and other teams) and build consensus around it
  • Provide technical leadership and governance of the big data team and of the solution architecture implementation across the following Hadoop ecosystem: Hadoop (Hortonworks), MapReduce, Pig, Hive, HCatalog, Tez, Spark, Phoenix, Presto, HBase, Accumulo, Storm, Kafka, Flume, Falcon, Atlas, Oozie, Ambari, Hue; security: Kerberos, Ranger, Knox, Oracle ASO, HDFS encryption, AD/LDAP; hosting platform: AWS
  • Manage architecture design changes driven by changing business requirements and interface integrations
  • Provide overall architect responsibilities, including roadmaps, leadership, planning, technical innovation, security, IT governance, etc.
  • Design, lay out, and deploy Hadoop clusters in the cloud using the Hadoop ecosystem and open-source platforms
  • Configure and tune production and development Hadoop environments across the various intermixed Hadoop components
  • Deliver end-to-end system implementations, addressing data security and privacy concerns
  • Design and implement geospatial big data ingestion, processing, and delivery
  • Provide cloud-computing infrastructure solutions on Amazon Web Services (AWS: EC2, VPCs, S3, IAM)

Basic Qualifications:

  • Min. 5+ years of experience in big data architecture and the Apache Hadoop stack (HDFS, HBase, Hive, Pig, Mahout, Flume, Sqoop, MapReduce, YARN)
  • Min. 3-5 years of experience with Hortonworks Hadoop (HDInsight, HDFS, HBase, Flume, Sqoop, MapReduce, YARN)
  • Min. 1-3 years of experience with Spark/Shark/MLlib


Key Skills: