
Senior Big Data Engineer (Hadoop, Spark, Python) - Location: Stow, MA


2017-03-03 11:22:07
Job Type: Full Time only
Budget: $100,000 - $200,000

  • We are looking for a Senior Big Data Engineer to join our Big Data Analytics Platform team. In this role, you will be responsible for designing, developing, deploying and supporting the data ingestion pipeline from our Connected Products.
  • In this role, you will partner with our data science community by supplying highly performant data sets for advanced analytics, and will provide leadership and experience in the data engineering space.
  • The candidate must be results driven, customer focused, technologically savvy, and skilled at working in an agile development environment.

Job Responsibilities:

  • Design and develop the data ingestion pipelines into the Big Data & Analytics Platform for a variety of big data use cases
  • Deploy and support highly optimized solutions with a focus on automation
  • Participate in the end-to-end delivery of business use cases, including data architecture, to deliver results
  • Deliver, as part of an agile scrum team, the highest-business-value use cases that will drive our business strategy
  • Partner with IT architects to define analytic architecture that best leverages the Big Data Platform to enable advanced analytics capabilities
  • Provide leadership to the ongoing maturity of the development process, coach/mentor the development team on best practices and methodologies for enhanced solution development.
  • Stay up to date on the relevant technologies, plug into user groups, understand trends and opportunities

Required Demonstrable Skills:

  • Deep expertise in working with data of all kinds: clean, dirty, unstructured, and semi-structured
  • Strong expertise in Apache Spark (batch and Spark Streaming)
  • Experience in real-time and batch data processing and associated technologies
  • Demonstrable strong skills in programming/scripting languages such as Python, Scala, and Java
  • Ability to design, develop, and deploy end-to-end data pipelines that meet business requirements and use cases
  • Experience with all Cloudera components, with a focus on Impala, Hive, HBase, Sqoop, and Hue
  • Experience with automation technologies such as Oozie
  • Strong experience with large cloud-compute infrastructure solutions such as Amazon Web Services, Google Cloud Platform, and Azure
  • Knowledge of UNIX/Linux
  • Experience with Text Analytics
  • Strong knowledge of SQL
  • Experience in Hadoop Platform Security and Hadoop Data Governance topics
  • Experience triaging production issues to understand and resolve them
  • Experience in technical computing (optimization, statistics and machine learning)
  • Experience with analytics visualization software such as Tableau
  • Experience leading development teams, defining development processes, and evaluating new technologies and practices to enhance solution delivery

Qualifications:

  • Minimum of BS in Computer Science or similar field required
  • Must have at least 5-8 years' experience in information management and application development
  • Must have a minimum of 3-5 years working hands on with Big Data technologies