
AWS Site Reliability Engineer - Remote

COLUMBUS, OH
Posted: 2023-01-18
Job Type: Full Time only
Budget: $100K - $115K

Summary:

The IS Technical Specialist provides technical and consultative support on the most complex matters.

Duties and Responsibilities:

  • Migrate data from a multitude of data stores into the Data Lake
  • Orchestrate ETL processes that transform that data and slice it into the various data marts (a minimal PySpark sketch follows this list)
  • Manage access to the data through Lake Formation
  • Build a data delivery pipeline that ingests high-volume real-time streams, detects anomalies, performs windowed analytics, and publishes the results to Elasticsearch for dashboard consumption
  • Analyze, scope, and estimate tasks, identify technology stack and tools
  • Design and implement optimal architecture and migration plan
  • Develop new solution modules and re-architect existing ones; redesign and refactor program code
  • Specify the infrastructure and assist DevOps engineers with provisioning
  • Examine performance and advise on necessary infrastructure changes
  • Communicate with the client on project-related issues
  • Collaborate with in-house and external development and analytical teams
  • Analyze, design, and develop systems based on user specifications
  • Provide technical assistance in solving hardware or software problems
  • Maintain in-depth knowledge of, and work with, the technical tools available for systems development and support
  • Maintain and demonstrate knowledge of technology industry trends, particularly as they apply to Huntington
  • May assist with identifying training needs or with training less experienced staff
  • May serve as project leader for specific projects
  • Perform other duties as assigned
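
As a concrete illustration of the migration and ETL duties above, here is a minimal, hypothetical PySpark sketch of the kind of transformation logic a Glue job might run (a real Glue job would also obtain a GlueContext from the awsglue library). The bucket paths, column names, and table layout are placeholder assumptions, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-lake-etl").getOrCreate()

# Read raw CSV exports landed by an upstream system (placeholder path).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-zone/transactions/")
)

# Light cleanup: typed columns, a partition key, and de-duplication.
cleaned = (
    raw
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["transaction_id"])
)

# Write curated Parquet into the lake, partitioned by date (placeholder path),
# ready to be sliced further into data marts downstream.
(
    cleaned.write
    .mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-curated-zone/transactions/")
)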

Please Note: This position is available for remote work; however, candidates are expected to be available during Eastern or Central time zone hours.

Basic Qualifications:

  • Bachelor's Degree
  • 5 years of experience with AWS operations

Preferred Qualifications:

  • Hands-on experience designing efficient architectures for high-load enterprise-scale applications or ‘big data’ pipelines
  • Hands-on experience utilizing AWS data toolsets, including but not limited to DMS, Glue, DataBrew, EMR, and SCT
  • Practical experience in implementing big data architecture and pipelines
  • Hands-on experience with message queuing, stream processing, and highly scalable ‘big data’ stores
  • Advanced knowledge and experience working with SQL and NoSQL databases
  • Proven experience redesigning and re-architecting large, complex business applications
  • Strong self-management and self-organizational skills
  • Successful candidates should have experience with any of the following software/tools (not all required at the same time):
    • Python and PySpark: strong knowledge, especially developing Glue jobs
    • Big data tools: Kafka, Spark, Hadoop (HDFS3, YARN2, Tez, Hive, HBase)
    • Stream-processing systems: Kinesis Streaming, Spark Streaming, Kafka Streams, Kinesis Analytics (a minimal consumer sketch follows this list)
    • AWS cloud services: EMR, RDS, MSK, Redshift, DocumentDB, Lambda
    • Message queue systems: ActiveMQ, RabbitMQ, AWS SQS
    • Federated identity services (SSO): Okta, AWS Cognito
  • Experience working in a multi-platform environment
  • Ability to balance both development and support roles
  • Experience working on projects that involve multiple business segments
  • 3+ years of experience in a Data, Cloud, or Software Engineering role, with a degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
  • Experience using Apache Hudi with AWS data lakes
  • 3+ years of graph database development and optimization: Neo4j, Gremlin, Amazon Neptune, knowledge graphs
  • Valid AWS certifications would be a great plus
  • Strong interpersonal skills, focus on customer service, and the ability to work well with other IT, vendor, and business groups
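
To make the stream-processing expectations concrete, here is a minimal, hypothetical Python sketch of a Kinesis consumer with a simple sliding-window anomaly check, of the kind the streaming duty above describes. The stream name, record fields, and threshold are illustrative assumptions; a production pipeline would more likely use Kinesis Analytics or Spark Streaming and index flagged results into Elasticsearch rather than printing them.

import json
import time
from collections import deque

import boto3

STREAM_NAME = "example-events"   # placeholder stream name
WINDOW = deque(maxlen=100)       # sliding window of recent metric values
THRESHOLD = 3.0                  # flag values > 3 std devs from the window mean

def is_anomalous(value: float) -> bool:
    """Simple z-score check against the current sliding window."""
    if len(WINDOW) < 10:
        return False
    mean = sum(WINDOW) / len(WINDOW)
    var = sum((v - mean) ** 2 for v in WINDOW) / len(WINDOW)
    std = var ** 0.5
    return std > 0 and abs(value - mean) / std > THRESHOLD

kinesis = boto3.client("kinesis")

# Read from the first shard only; a real consumer would fan out across shards.
shard_id = kinesis.describe_stream(StreamName=STREAM_NAME)[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM_NAME,
    ShardId=shard_id,
    ShardIteratorType="LATEST",
)["ShardIterator"]

while True:
    resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in resp["Records"]:
        event = json.loads(record["Data"])
        value = float(event["metric_value"])  # placeholder field name
        if is_anomalous(value):
            # A real pipeline would index this document into Elasticsearch here.
            print("anomaly:", event)
        WINDOW.append(value)
    iterator = resp["NextShardIterator"]
    time.sleep(1)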

