
Sr. Data Engineer

Advanced Tech Placement

Parsippany-Troy Hills · On-site · Contract · Senior · 2w ago

About the role

We are looking for a highly skilled Data Engineer with deep hands‑on experience in Python, PySpark, Databricks, and DynamoDB to help build and optimize modern data pipelines at scale. This role is ideal for someone who excels at high‑performance data engineering, thrives in a collaborative environment, and can leverage emerging AI tools (such as GitHub Copilot) to improve delivery speed and code quality.

Required Qualifications

  • 8 years of professional experience in data engineering or a related field.
  • Strong proficiency in:
    • Python
    • PySpark
    • Databricks (Delta Lake, notebooks, jobs, cluster management)
  • Hands‑on experience with AWS DynamoDB in production environments.
  • Experience building scalable data pipelines for ingestion, transformation, and retrieval.
  • Strong understanding of DataFrame optimizations, performance tuning, and distributed computing best practices.
  • Familiarity with GitHub, JIRA, and Confluence for version control and team collaboration.
  • Strong communication skills and ability to partner with both technical and business stakeholders.

Nice to Have

  • Experience using AI‑assisted coding tools such as GitHub Copilot, Amazon Q, or similar LLM‑based development accelerators.
  • Exposure to AWS services beyond DynamoDB (S3, Lambda, Glue, EMR, etc.), though not required.
  • Familiarity with CI/CD tools such as Jenkins or infrastructure‑as‑code tools like Terraform.
  • Experience with monitoring tools (Splunk, Dynatrace), though these are not primary requirements.

Key Responsibilities

  • Design, develop, and maintain data ingestion and transformation pipelines using Python, PySpark, and Databricks.
  • Build and optimize workflows for reliable, scalable, and performant data processing.
  • Work with AWS DynamoDB for production use cases involving NoSQL storage and retrieval.
  • Collaborate closely with cross‑functional partners (engineering, product, analytics) to translate business needs into technical solutions.
  • Leverage AI developer tools (e.g., GitHub Copilot, Amazon Q) to accelerate development, improve code quality, and support team productivity.
  • Ensure high standards for data quality, documentation, testing, and maintainability.
  • Participate in code reviews, design sessions, and continuous improvement efforts.
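For candidates unfamiliar with Databricks job orchestration (mentioned under "jobs, cluster management" above), a pipeline like the ones described is typically packaged as a job definition. The sketch below is purely illustrative — the job name, notebook path, and cluster settings are placeholder values, not details from this posting:

```json
{
  "name": "example-ingestion-pipeline",
  "tasks": [
    {
      "task_key": "ingest_and_transform",
      "notebook_task": {
        "notebook_path": "/Repos/data-eng/pipelines/ingest_transform"
      },
      "new_cluster": {
        "spark_version": "13.3.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 4
      }
    }
  ]
}
```

A definition in roughly this shape is what the Databricks Jobs UI and API manage; the notebook it points at would hold the Python/PySpark transformation logic.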

Skills

AWS DynamoDB · Databricks · Delta Lake · GitHub · GitHub Copilot · JIRA · Lambda · Python · PySpark
