
Senior Data Engineer

Soni

Hazlet · On-site · Full-time · Senior · 2w ago

About the role

Our client is seeking a highly skilled Senior Data Engineer who can balance deep technical execution with growing leadership responsibilities. This individual will drive high-impact data initiatives end-to-end, serve as the primary technical expert on the team, and gradually take on leadership of a small team of two engineers—primarily offshore resources. The role is ideal for someone who thrives in an 80% hands-on environment while also guiding and mentoring others as the practice grows.

You will partner directly with business leaders and cross-functional teams to understand requirements, translate them into scalable technical solutions, and design pipelines and workflows built on modern data platforms. This position requires strong expertise in SQL, PySpark, and Databricks, including job scheduling and orchestration using tools such as Control-M or similar.

Key Responsibilities

Technical Leadership & Ownership (20%)

  • Provide technical guidance and mentorship to offshore engineers, ensuring quality, consistency, and adherence to best practices.
  • Act as the lead engineer on critical projects, setting standards for code quality, architecture, and delivery.
  • Support planning and prioritization for a lean engineering team, with an opportunity to formally grow into managing a small team.

Hands-On Engineering (80%)

  • Design, develop, and maintain scalable data pipelines using Databricks, SQL, and PySpark.
  • Build and optimize ETL/ELT workflows for ingestion, transformation, and processing of large datasets.
  • Manage Databricks jobs, including scheduling, automation, and orchestration using Control-M or a similar scheduling platform.
  • Develop high-quality, production-ready solutions that support analytics, reporting, and operational data needs.
  • Diagnose and remedy pipeline issues, performance bottlenecks, and data quality challenges.

Collaboration & Business Engagement

  • Work directly with business stakeholders to gather requirements, understand use cases, and translate needs into robust technical designs.
  • Partner with cross-functional teams including product, analytics, and architecture groups to deliver integrated, scalable solutions.
  • Communicate technical concepts clearly to both technical and non-technical audiences.

Architecture & Optimization

  • Support data modeling, schema design, and performance tuning across cloud and on-prem data systems.
  • Implement data management best practices—governance, observability, documentation, and operational standards.
  • Continuously assess and improve pipelines, architecture, and tooling to enhance reliability and speed.

Qualifications

  • 15+ years of IT experience with 8+ years in Data Engineering or related fields.
  • Deep expertise with SQL, PySpark, and Databricks, including job orchestration and scheduling.
  • Proven experience building and optimizing large-scale ETL/ELT pipelines.
  • Strong understanding of cloud data ecosystems (AWS preferred) and data warehousing platforms such as Redshift or Snowflake.
  • Experience working with SQL/NoSQL databases and modern data integration patterns.
  • Bonus: Experience with Fivetran or similar ingestion tools.
  • Familiarity with Hadoop ecosystem tools and ETL platforms is a plus.
  • Excellent communication skills and demonstrated ability to interact with business stakeholders.
  • Prior experience in insurance or regulated industries is advantageous.

Who You Are

  • A hands-on technical lead who enjoys building and delivering high-quality data solutions.
  • Someone who can work independently, drive initiatives, and take ownership from concept to deployment.
  • A natural mentor who supports and guides offshore resources while still owning the most complex engineering tasks.
  • An effective communicator who can gather business requirements and translate them into strong technical plans.
  • A problem-solver who thrives in evolving environments and stays current with modern data engineering practices.

Compensation

Up to $150,000 annually + bonus

Compensation is based on a range of factors, including relevant experience, knowledge, skills, and other job-related qualifications.


Skills

  • SQL
  • PySpark
  • Databricks
  • Control-M
  • ETL/ELT workflows
  • Data modeling
  • Schema design
  • Performance tuning
  • Cloud data ecosystems
  • Data warehousing platforms
  • SQL/NoSQL databases
  • Modern data integration patterns
