Senior Big Data Platform Engineer

The Hartford

Morristown · Hybrid · Full-time · Senior · $136k – $204k/yr

About the role

Join our dynamic team at The Hartford, where we're committed to making a significant impact beyond traditional insurance roles. As a Senior Big Data Platform Engineer, you'll play a crucial part in shaping the future of our Cloud Big Data Platform Engineering & Operations team, fostering a customer-first mindset and ensuring our data platforms are stable, scalable, and reliable.

In this technical lead role, you will leverage your deep expertise in AWS Big Data/EMR infrastructure, Infrastructure as Code (IaC), automation, security, and observability to engineer and maintain cutting-edge solutions. Working closely with data engineers, you'll analyze requirements and challenges to recommend platform-driven solutions that maximize performance and efficiency.

We're seeking a passionate technologist who thrives in a fast-paced environment and is dedicated to creating resilient, future-ready data platforms. You will also mentor and guide fellow platform and reliability engineers, promoting a culture of technical excellence, innovation, and collaboration.

Responsibilities

  • Administer and optimize Big Data platforms across multiple Hadoop clusters in the cloud (AWS EMR), ensuring top-tier scalability and reliability.
  • Design, implement, and maintain multi-tenant Data Platforms using Infrastructure as Code (IaC) while aligning with The Hartford's engineering and security principles.
  • Drive operational excellence by proactively managing incidents and minimizing service restoration times, demonstrating end-to-end ownership.
  • Apply Site Reliability Engineering (SRE) principles to build robust tooling, alerts, and automated responses that mitigate reliability risks.
  • Shape the architecture of Platform, PaaS, and SaaS solutions, driving innovation and operational excellence.
  • Participate in an on-call rotation, providing technical expertise during service-impacting incidents to ensure quick diagnosis and effective resolution.
  • Evaluate and implement emerging data technologies focused on big data, analytics, data wrangling, and BI to enhance efficiency.
  • Act as a subject matter expert for data platforms, driving root cause analyses and ensuring reliability and performance.
  • Provide mentorship to junior and mid-level data engineers, nurturing skill development and promoting best practices.
  • Empower data engineers, scientists, and analysts by enabling self-service capabilities for data exploration and analysis.
  • Develop training materials and deliver training sessions to enhance user engagement with data solutions.
  • Support documentation and visualization of data assets to improve discoverability and self-service analytics.
  • Foster a collaborative culture by aligning team priorities and goals.

Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field; relevant experience may substitute for formal education.
  • Extensive expertise in big data technologies, data engineering, analytics, and platform operations.
  • Proficient in Hadoop, Linux, Python, SQL, Spark, security protocols, and performance tuning.
  • At least 7 years of diverse experience in IT, including platform administration and analytics.
  • Hands-on experience with AWS solutions and Infrastructure as Code (CloudFormation, Terraform) and CI/CD pipelines.
  • 3+ years providing architectural guidance and technical direction.
  • Experience in data engineering with a focus on ETL processes and analytics applications.
  • Skilled in change management and incident management processes.
  • Familiarity with implementing Reliability Engineering practices is a plus.
  • Able to learn new technologies quickly and take on full responsibilities within the first year.
  • Strong analytical and problem-solving skills, with excellent communication abilities.

Preferred Skills

  • Advanced experience with AWS services such as EMR, EKS, and S3.
  • Experience using CloudFormation or Terraform for configuration management.
  • Advanced programming/scripting skills in Python or Bash.
  • Preferred certifications: AWS Certified Solutions Architect and Cloudera Administrator.

Work Environment

This position offers a hybrid work schedule, with expectations to work in the office (Columbus, OH, Chicago, IL, Hartford, CT, or Charlotte, NC) three days a week (Tuesday through Thursday).

Sponsorship

Applicants must be authorized to work in the US without company sponsorship. The Hartford does not support the STEM OPT I-983 Training Plan endorsement for this position.

Compensation

The annual base pay range is $136,000 - $204,000, varying based on performance and competencies. This compensation is part of The Hartford's total rewards package, which may also include bonuses, long-term incentives, and recognition programs.

The Hartford is an Equal Opportunity Employer.

Skills

AWS, AWS CloudFormation, AWS EMR, AWS EKS, AWS S3, Bash, Big Data, BI, CI/CD, Cloudera Admin, Data Engineering, ETL, Hadoop, Infrastructure as Code, Linux, PaaS, Python, SaaS, Security, Site Reliability Engineering, SQL, Spark, Terraform
