
Data Engineer / ETL Lead — Sr. Technical Lead, Data Engineering

Birlasoft Limited

Pune · On-site · Full-time · Senior

About the role

Area(s) of responsibility

Job Title

Lead Data Engineer / ETL Lead

Key Responsibilities

  • Lead the design, development, and optimization of end-to-end data pipelines across batch and real-time processing
  • Architect and implement enterprise-grade ETL solutions using Informatica, Ab Initio, and cloud-native services
  • Drive large-scale data migration and conversion initiatives, including mock runs, reconciliation, validation, and production cutovers
  • Design and manage cloud-based data platforms, leveraging Snowflake and AWS analytics services
  • Build and optimize PySpark-based data processing frameworks for high-volume datasets
  • Implement real-time data ingestion and transformation pipelines using Kafka and streaming technologies
  • Own performance tuning, scalability, cost optimization, and SLA adherence for data workloads
  • Collaborate closely with business, functional, and architecture teams to translate requirements into robust technical solutions
  • Lead and mentor development teams, conducting code reviews and enforcing engineering best practices
  • Oversee CI/CD, scheduling, monitoring, and operational stability of data pipelines
  • Support Agile delivery by participating in planning, backlog grooming, execution, and retrospectives

Skills

Ab Initio, AWS, CI/CD, Cloud Native, Data Engineering, ETL, Informatica, Kafka, PySpark, Snowflake
