
AWS Data Engineer

Jobs via Dice

Reston · On-site · Full-time · Posted today

About the role


Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.

Key Responsibilities

  • Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms
  • Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
  • Implement messaging and event-driven patterns using AWS SNS and SQS
  • Design and optimize data storage and querying in Amazon Redshift
  • Write performant SQL for data transformations, validation, and reporting
  • Ensure data quality, monitoring, error handling, and operational support for pipelines
  • Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
  • Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
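As a rough illustration of the ETL and data-quality work described above, here is a minimal sketch in plain Python (a production pipeline in this role would run as PySpark on AWS Glue; the field names and rules below are hypothetical):

```python
# Illustrative only: one ETL transform step with row-level data-quality checks.
# In practice this logic would live in a PySpark job on AWS Glue.

def transform(records):
    """Split input rows into (clean, rejected) and normalize the clean ones."""
    clean, rejected = [], []
    for row in records:
        # Example data-quality rule: require a non-empty id and a numeric amount.
        if not row.get("id") or not isinstance(row.get("amount"), (int, float)):
            rejected.append(row)
            continue
        # Normalize: trim the id, round the amount to 2 decimal places.
        clean.append({"id": str(row["id"]).strip(),
                      "amount": round(float(row["amount"]), 2)})
    return clean, rejected

raw = [
    {"id": " a1 ", "amount": 10.456},
    {"id": "", "amount": 5},         # rejected: empty id
    {"id": "b2", "amount": "oops"},  # rejected: non-numeric amount
]
clean, rejected = transform(raw)
print(len(clean), len(rejected))  # 1 2
```

Rejected rows are returned rather than dropped so they can be routed to monitoring or a dead-letter queue (e.g. SQS), matching the error-handling responsibility above.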

Required Skills

  • Strong experience with Python and PySpark for large-scale data processing
  • Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions
  • Solid SQL skills and familiarity with data modeling and query optimization
  • Experience with ETL best practices, data quality checks, and monitoring/alerting
  • Familiarity with version control (Git) and basic DevOps/CI-CD workflows

Skills

AWS Glue · AWS Lambda · AWS Redshift · AWS SNS · AWS SQS · AWS Step Functions · CI/CD · Docker · Git · Python · PySpark · SQL
