
Data Engineers (AWS)

Jobs via Dice

Newark · Hybrid · Full-time · Mid Level · Posted yesterday

About the role

Job Summary

We are seeking a talented AWS Data Engineer to join our dynamic Data Engineering team. The ideal candidate will design, develop, and maintain scalable data pipelines and architectures in the AWS cloud environment, collaborating closely with data scientists, analysts, and other business stakeholders to deliver robust data solutions.

Key Responsibilities

  • Design, build, and maintain efficient, reusable, and reliable architecture and code for data pipelines and data applications on AWS.
  • Build robust data ingestion pipelines (from on-prem to AWS and within AWS) using AWS services such as Glue, Redshift, S3, Lambda, EMR/Spark, Kinesis, and SQS.
  • Develop and manage ETL/ELT processes to collect, process, and store data from multiple sources, ensuring data quality, integrity, and security.
  • Architect and implement end-to-end data solutions (ingestion, storage, integration, processing, access) on AWS, with a focus on data lakes and data warehouses.
  • Participate in the architecture and system design discussions for high-scale data engineering projects.
  • Independently perform hands-on development, unit testing, and participate in code reviews to ensure adherence to best practices.
  • Implement serverless applications using AWS Lambda, API Gateway, Step Functions, and other AWS technologies.
  • Migrate data from traditional relational databases, file systems, and APIs to AWS-based data lakes (S3), RDS, Aurora, and Redshift.
  • Implement high-velocity streaming solutions using Amazon Kinesis, SQS, and Kafka (preferred).
  • Architect and implement CI/CD strategies for enterprise data platforms.
  • Collaborate with product, operations, QA, and cross-functional teams throughout the software development cycle.
  • Stay abreast of new technology developments, implement POCs for new tools/technologies, and onboard them for real-world use cases.
  • Identify and resolve performance issues and continuously optimize for cost, reliability, and scalability.
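To give a flavor of the serverless ingestion work described above, here is a minimal sketch of an S3-triggered Lambda handler. The event shape follows AWS's S3 notification format; the ingestion targets named in the comments are illustrative, not part of this role's actual stack:

```python
# Minimal sketch: Lambda entry point for an S3-triggered ingestion step.
import json


def handler(event, context):
    """Parse an S3 event notification and list the objects to ingest."""
    records = event.get("Records", [])
    objects = [
        {
            "bucket": r["s3"]["bucket"]["name"],
            "key": r["s3"]["object"]["key"],
        }
        for r in records
        if "s3" in r
    ]
    # In a real pipeline, each object would then be read (e.g. with boto3)
    # and loaded into a target such as Redshift or a Glue catalog table.
    return {"statusCode": 200, "body": json.dumps(objects)}
```

In practice the heavy lifting (reads, transforms, loads) lives behind this entry point or in downstream Step Functions states; the handler itself stays small and testable.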

Required Qualifications

  • 3+ years of experience implementing and supporting data lakes, data warehouses, and data applications on AWS for large enterprises.
  • Strong programming experience with Python, Shell scripting, and SQL.
  • Solid experience with AWS services: CloudFormation, S3, Athena, Glue, EMR/Spark, RDS, Redshift, DynamoDB, Lambda, Step Functions, IAM, KMS, Secrets Manager.
  • Experience in serverless application development and data pipeline orchestration.
  • Experience in system analysis, design, development, and implementation of data ingestion pipelines in AWS.
  • Knowledge of ETL/ELT, data modeling, and big data technologies.
  • Familiarity with data warehousing concepts and cloud-based architecture.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork abilities.

Preferred Qualifications

  • Experience with additional AWS services: API Gateway, Elasticsearch, SQS.
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
  • Experience with DevOps practices and CI/CD pipelines.
  • Experience implementing end-to-end streaming solutions (Amazon Kinesis, SQS, Kafka).
  • AWS Solutions Architect or AWS Developer certification.
  • Understanding of Lakehouse/data cloud architecture.
  • Knowledge of data governance and compliance standards.

Skills

API Gateway · Athena · Aurora · CloudFormation · Docker · DynamoDB · EMR · Elasticsearch · Glue · IAM · Kinesis · KMS · Lambda · Python · RDS · Redshift · S3 · Secrets Manager · Shell scripting · Spark · SQL · SQS · Step Functions · Terraform
