Data Engineer III (Senior)
Atlantic Partners
About the role
Our client is seeking a Senior Data Engineer with deep expertise in building scalable, cloud‑native data platforms on AWS. This is a hands‑on engineering role focused on designing and implementing modern lakehouse architectures using AWS managed services, open table formats (Apache Iceberg), and compute running in the client's EKS/Argo Workflows environments.
Team Culture / Work Environment
- 4‑5 data teams
- All teams follow the SAFe framework
- Work is organized into sprints with defined deliverables
- Highly collaborative
- Fast‑paced
- Most of the team is hybrid
- Culture of ownership and taking initiative
Daily Responsibilities
Advanced Python Engineering Skills
- Strong proficiency in Python for data engineering tasks.
- Experience with modular, testable code and production‑grade pipelines (a short sketch of this style follows this list).
- Not looking for SQL‑heavy DBAs or analysts; this is a software engineering role.
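To give candidates a concrete sense of the expected engineering style, here is a minimal, purely illustrative sketch (the module, function, and column names are hypothetical, not taken from the client's codebase): a pipeline step written as a pure transform with a matching pytest unit test.

```python
# pipelines/transforms.py -- hypothetical module; all names are illustrative
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def standardize_events(df: DataFrame) -> DataFrame:
    """Pure, unit-testable transform: trim ids, parse timestamps, drop bad rows."""
    return (
        df.withColumn("event_id", F.trim("event_id"))
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .filter(F.col("event_id") != "")
        .dropna(subset=["event_ts"])
    )

# tests/test_transforms.py -- runs against a local Spark session in CI
from pyspark.sql import SparkSession

def test_standardize_events_drops_malformed_rows():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    raw = spark.createDataFrame(
        [(" a1 ", "2024-01-01 00:00:00"),   # valid after trimming
         ("", "2024-01-02 00:00:00"),       # empty id -> dropped
         ("a2", "not-a-date")],             # unparseable timestamp -> dropped
        ["event_id", "event_ts"],
    )
    out = standardize_events(raw)
    assert out.count() == 1
    assert out.first()["event_id"] == "a1"
```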
AWS Lakehouse Architecture Expertise
- Proven experience designing and implementing lakehouse architectures on AWS.
- Familiarity with key AWS services: S3, Glue, Athena, Glue Data Catalog, Lake Formation, QuickSight, CloudWatch, etc.
- Experience with Amazon QuickSight (preferred), Tableau, or Cognos.
- ETL pipeline development.
- Bonus: Experience with EKS‑based orchestration using EMR on EKS or Argo Workflows.
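As a purely illustrative sketch of day‑to‑day work against these services, using the AWS SDK for pandas (awswrangler); the bucket, Glue database, and table names below are hypothetical:

```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"order_id": [1, 2], "amount": [9.99, 24.50]})

# Write a Parquet dataset to S3 and register/update the table in the Glue Data Catalog
wr.s3.to_parquet(
    df=df,
    path="s3://example-lake/silver/orders/",  # hypothetical bucket
    dataset=True,
    database="silver",                        # hypothetical Glue database
    table="orders",
    mode="append",
)

# Query the same table back through Athena
counts = wr.athena.read_sql_query("SELECT COUNT(*) AS n FROM orders", database="silver")
```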
Open Table Formats
- Deep understanding of Apache Iceberg (preferred), Delta Lake, or Apache Hudi.
- Experience implementing time‑travel, schema evolution, and partitioning strategies.
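For illustration, here is what those capabilities look like in Iceberg's Spark SQL syntax (a sketch only; it assumes a Spark session already configured with an Iceberg catalog named glue, and all table names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # Iceberg catalog config assumed in place

# Partitioning strategy: Iceberg's hidden partitioning via a column transform
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.silver.orders (
        order_id BIGINT, amount DOUBLE, event_ts TIMESTAMP)
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Schema evolution: adding a column is a metadata-only operation
spark.sql("ALTER TABLE glue.silver.orders ADD COLUMN discount DOUBLE")

# Time travel: query the table as of an earlier point in time (Spark 3.3+ syntax)
spark.sql("SELECT * FROM glue.silver.orders TIMESTAMP AS OF '2024-01-01 00:00:00'")
```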
Medallion Architecture Implementation
- Experience designing and implementing Bronze, Silver, Gold data layers.
- Understanding of ingestion, transformation, and curation best practices.
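A compressed, illustrative sketch of how those layers can fit together in PySpark (paths, catalog, and tables are hypothetical, and the target Iceberg tables are assumed to already exist):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw data as-is, tagged with ingestion metadata
bronze = (spark.read.json("s3://example-lake/landing/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.writeTo("glue.bronze.orders").append()

# Silver: validated, deduplicated, strongly typed
silver = (spark.table("glue.bronze.orders")
          .dropDuplicates(["order_id"])
          .withColumn("amount", F.col("amount").cast("double")))
silver.writeTo("glue.silver.orders").append()

# Gold: curated aggregates ready for BI consumption (e.g., QuickSight)
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.writeTo("glue.gold.customer_ltv").createOrReplace()
```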
Strong understanding of core data modeling concepts, including slowly changing dimensions (SCD Type 2) and their implementation in modern data platforms.
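For example, one common SCD Type 2 pattern on an Iceberg table uses MERGE INTO to expire the current row when tracked attributes change (all table and column names are hypothetical; staged_customers is assumed to be a temp view of the incoming batch):

```python
# Continues the Spark session sketched above. Note that a single MERGE cannot
# both expire the old row and insert its replacement for the same key, so a
# second pass (or the usual union-on-a-null-merge-key trick) inserts the new
# current row for changed keys.
spark.sql("""
    MERGE INTO glue.silver.customer_dim AS d
    USING staged_customers AS s
      ON d.customer_id = s.customer_id AND d.is_current = true
    WHEN MATCHED AND d.attrs_hash <> s.attrs_hash THEN
      UPDATE SET d.is_current = false, d.valid_to = s.load_ts
    WHEN NOT MATCHED THEN
      INSERT (customer_id, attrs_hash, valid_from, valid_to, is_current)
      VALUES (s.customer_id, s.attrs_hash, s.load_ts, NULL, true)
""")
```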
Hands‑on experience with distributed computing frameworks such as Apache Spark or similar technologies.
Experience with CI/CD tools and practices for building, testing, and deploying data pipelines.
Strong communication and documentation skills.
Ability to work independently and collaborate with cross‑functional teams including tech leads, architects, and product managers.
Degree or Certification Required?
- None
Years of Experience?
- 4‑5 years in data engineering / AWS
Nice to Haves
- Experience with DataOps practices and CI/CD for data pipelines.
- Familiarity with Terraform or CloudFormation for infrastructure‑as‑code.
- Exposure to data quality frameworks like Deequ or Great Expectations (the sketch after this list shows the kind of check they formalize).
- Undergraduate degree.
- Iceberg on AWS
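Purely for illustration, the kind of check those frameworks formalize can be hand‑rolled in a few lines (column names are hypothetical):

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def quality_gate(df: DataFrame) -> DataFrame:
    """Fail the run early if basic expectations are violated; Deequ and
    Great Expectations add declarative suites, profiling, and reporting."""
    total = df.count()
    null_ids = df.filter(F.col("order_id").isNull()).count()
    dupes = total - df.dropDuplicates(["order_id"]).count()
    if null_ids or dupes:
        raise ValueError(f"quality gate failed: {null_ids} null ids, {dupes} duplicates")
    return df
```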