Lead Cloud Data Engineer

Incedo Inc.

New York · Hybrid · Full-time · Lead · Posted today

About the role

We are looking for a highly experienced Data Engineer – AWS Tech Lead to design and lead scalable cloud‑based data solutions. This role will combine hands‑on engineering with leadership responsibilities, guiding a team of data engineers while collaborating with architects, product owners, and business stakeholders. The ideal candidate will have deep expertise in AWS data services, ETL/ELT pipelines, and big data frameworks, with proven experience in leading technical delivery.

Responsibilities

  • Lead the design, development, and deployment of large‑scale data pipelines and solutions on AWS
  • Provide technical leadership and mentorship to a team of data engineers, ensuring best practices in coding, architecture, and performance optimization
  • Architect data lake and data warehouse solutions leveraging AWS Glue, Redshift, S3, Athena, EMR, and Lambda
  • Drive development of ETL/ELT processes using PySpark, Python, and SQL
  • Implement and enforce data governance, quality checks, and security standards across the platform
  • Collaborate with business analysts, data scientists, and product teams to translate business requirements into scalable solutions
  • Optimize cost and performance of AWS‑based data workloads
  • Troubleshoot production issues and ensure high availability of data pipelines
  • Participate in sprint planning, backlog refinement, and Agile ceremonies as a technical lead
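
As a miniature illustration of the pipeline data-quality work described above, here is a plain-Python sketch of a validation gate of the kind an AWS Glue or PySpark job might apply before loading (field names and the record shape are hypothetical, not taken from the posting):

```python
def quality_gate(records, required_fields=("id", "amount")):
    """Split a batch into valid rows and rejects.

    A toy stand-in for the quality checks a production ETL job
    would enforce before writing to the warehouse. Each reject is
    returned with the list of fields that were missing or null.
    """
    valid, rejects = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            rejects.append((rec, missing))
        else:
            valid.append(rec)
    return valid, rejects

# Example batch: one clean record, one with a null amount.
batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
good, bad = quality_gate(batch)
```

In a real pipeline the rejects would typically be routed to a quarantine location (e.g. an S3 prefix) for inspection rather than silently dropped.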

Required Qualifications

  • 13+ years of experience in Data Engineering, including at least 4 years in a technical leadership role
  • Strong expertise in AWS data services: Glue, S3, Redshift, Athena, Lambda, Step Functions, EMR
  • Proficiency in PySpark, Python, and SQL for large‑scale data processing
  • Solid experience in data modeling (star, snowflake, dimensional modeling) and data lake/warehouse architecture
  • Proven track record in leading and mentoring a data engineering team
  • Strong understanding of Agile methodologies and CI/CD practices for data pipelines
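
To illustrate the dimensional-modeling experience listed above, here is a minimal star-schema sketch using SQLite: one fact table joined to two dimension tables, with a typical rollup query (all table and column names are invented for the example):

```python
import sqlite3

# In-memory star schema: fact_trade at the center,
# dim_client and dim_date as dimensions. Names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_client (client_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date   (date_id   INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_trade (
    trade_id  INTEGER PRIMARY KEY,
    client_id INTEGER REFERENCES dim_client(client_id),
    date_id   INTEGER REFERENCES dim_date(date_id),
    amount    REAL
);
INSERT INTO dim_client VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO dim_date   VALUES (10, '2024-01-02');
INSERT INTO fact_trade VALUES (100, 1, 10, 250.0), (101, 2, 10, 75.5);
""")

# Typical analytical query: total traded amount per client per day.
rows = cur.execute("""
    SELECT c.name, d.iso_date, SUM(f.amount)
    FROM fact_trade f
    JOIN dim_client c ON f.client_id = c.client_id
    JOIN dim_date   d ON f.date_id   = d.date_id
    GROUP BY c.name, d.iso_date
    ORDER BY c.name
""").fetchall()
```

The same shape scales up directly: in the role described here, the fact and dimension tables would live in Redshift or as S3-backed tables queried through Athena, rather than SQLite.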

Preferred Qualifications

  • Experience in BFSI / Wealth Management domain
  • AWS Certification (Data Analytics Specialty, Solutions Architect, or Big Data)
  • Familiarity with containerization (Docker, Kubernetes) and orchestration frameworks

Education

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or related field
  • Master’s degree preferred

Skills

AWS · AWS Athena · AWS CloudFormation · AWS EMR · AWS Glue · AWS Lambda · AWS Redshift · AWS S3 · AWS Step Functions · CI/CD · Docker · Kubernetes · Python · PySpark · SQL
