
Data Engineer - Scala || Spark || AWS - EXL Services (Full Time)

Switch-Now Consulting

New Delhi · On-site · Full-time · Posted today

About the role


We are looking for a skilled Data Engineer with strong expertise in AWS, Scala, and Apache Spark to design, build, and optimize scalable data pipelines and data processing systems.

Key Responsibilities:
• Design and develop scalable data pipelines using Scala and Apache Spark
• Work with AWS services such as S3, Glue, Lambda, EMR, and Redshift
• Build and maintain ETL/ELT workflows for large-scale data processing
• Ensure data quality, integrity, and performance optimization
• Collaborate with cross-functional teams, including Data Analysts and Data Scientists
• Implement best practices for data governance and security

Required Skills:
• Strong experience with Scala and Apache Spark
• Hands-on experience with the AWS cloud platform
• Good understanding of distributed data processing and big data architecture
• Experience with ETL tools and data warehousing concepts
• Knowledge of SQL and data modeling
• Familiarity with CI/CD pipelines is a plus

Eligibility Criteria:
• 5–10 years of relevant experience
• Must have: AWS, Scala, Spark
