
Data Engineer IV

Robert Half

Philadelphia · Hybrid · Senior · Posted 1 week ago

Senior Data Engineer

Location: Philadelphia, PA (Hybrid/Onsite as required)
Employment Type: 39-Week Contract, Potential for Extension
Project Focus: Salesforce → Databricks Data Migration

About the Role

We are seeking a Senior Data Engineer to support a major Salesforce data migration initiative. The role centers on building, optimizing, and maintaining high-quality data pipelines that feed into Databricks, with a strong emphasis on Spark/PySpark and Python-based ETL development. The engineer will work closely with a senior team member, participate in Agile ceremonies, and contribute to the development of a core CRM data platform.
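
To make the role's core concrete, below is a minimal PySpark sketch of the kind of pipeline step the posting describes: reading raw Salesforce extracts from AWS storage, normalizing them, and writing a Delta table for Databricks consumers. Every path, table name, and column below is a hypothetical placeholder, not the project's actual design.

    # Minimal Salesforce -> Databricks pipeline step (all names are placeholders).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sf_account_migration").getOrCreate()

    # Assume raw Salesforce Account extracts have already landed on S3 as JSON,
    # e.g. via MuleSoft or an API pull; bucket and prefix are hypothetical.
    raw = spark.read.json("s3://crm-landing-zone/salesforce/Account/")

    # Light normalization: stable column names, typed timestamps, a load marker.
    accounts = (
        raw.select(
            F.col("Id").alias("account_id"),
            F.col("Name").alias("account_name"),
            F.to_timestamp("LastModifiedDate").alias("last_modified_at"),
        )
        .dropDuplicates(["account_id"])
        .withColumn("_ingested_at", F.current_timestamp())
    )

    # Delta gives Databricks consumers ACID writes, schema enforcement, and time travel.
    accounts.write.format("delta").mode("overwrite").saveAsTable("crm_bronze.salesforce_accounts")

Writing through saveAsTable registers the table in the metastore, so downstream CRM consumers can query it by name rather than by storage path.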

Key Responsibilities

Data Engineering & Development

  • Develop ETL jobs and data pipelines that migrate and integrate data between Salesforce, AWS, and Databricks (a minimal extraction sketch follows this list).
  • Build, test, and maintain scalable data pipelines on AWS and Databricks environments.
  • Use Python as the primary language for data engineering tasks and ETL job creation.
  • Use Spark and PySpark for all high-volume processing and transformation work (must-have).
  • Support integration and pipeline development, including MuleSoft-related components.
  • Document, test, QA, and provide post-delivery support for all data engineering deliverables.
  • Identify and mitigate risks, including eliminating single points of failure (SPOFs).
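
As referenced in the first bullet above, here is a hedged sketch of the extraction side. The posting names MuleSoft for integration components; purely for illustration, this example instead pulls Account records directly with the simple-salesforce client and lands them for Spark, so the object, fields, and credential handling are assumptions rather than the project's actual design.

    # Hypothetical direct pull from the Salesforce REST API via simple-salesforce.
    # The real project may route extraction through MuleSoft instead.
    import os

    from pyspark.sql import SparkSession
    from simple_salesforce import Salesforce

    spark = SparkSession.builder.appName("sf_extract").getOrCreate()

    sf = Salesforce(
        username=os.environ["SF_USERNAME"],
        password=os.environ["SF_PASSWORD"],
        security_token=os.environ["SF_TOKEN"],
    )

    # query_all pages through large result sets automatically; SOQL fields are placeholders.
    records = sf.query_all("SELECT Id, Name, LastModifiedDate FROM Account")["records"]

    # Strip the per-record 'attributes' metadata the REST API attaches, then let
    # Spark infer a schema from the remaining plain fields.
    rows = [{k: v for k, v in r.items() if k != "attributes"} for r in records]
    df = spark.createDataFrame(rows)

    # Land the raw extract for the downstream Databricks job (path is a placeholder).
    df.write.mode("overwrite").json("s3://crm-landing-zone/salesforce/Account/")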

Infrastructure & DevOps Collaboration

  • Use Terraform for infrastructure provisioning and environment management.
  • Set up and manage CI/CD pipelines using Concourse or GitHub Actions to ensure consistent and reliable deployments (a sample pipeline test follows this list).
  • Troubleshoot pipeline issues, resolve defects efficiently, and maintain reliable operations.
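
To show what gating deployments on reliability can look like, here is a minimal pytest sketch of the kind of unit test a Concourse or GitHub Actions job might run before shipping a pipeline change. The transformation under test is a toy stand-in, not an actual project function.

    # Hypothetical CI unit test for a PySpark transformation (pytest + local Spark).
    import pytest
    from pyspark.sql import SparkSession, functions as F

    @pytest.fixture(scope="session")
    def spark():
        # local[2] keeps the test self-contained; no cluster is needed in CI.
        return SparkSession.builder.master("local[2]").appName("ci-tests").getOrCreate()

    def dedupe_accounts(df):
        """Toy stand-in for a real pipeline step: one row per account_id."""
        return df.dropDuplicates(["account_id"])

    def test_dedupe_accounts_keeps_one_row_per_id(spark):
        df = spark.createDataFrame(
            [("001", "Acme"), ("001", "Acme"), ("002", "Globex")],
            ["account_id", "account_name"],
        )
        result = dedupe_accounts(df)
        assert result.count() == 2
        assert result.filter(F.col("account_id") == "001").count() == 1

Run on every push, a suite like this lets the CI/CD pipeline block a deploy whenever a transformation regresses.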

Cross‑Team Collaboration

  • Partner with engineering, architecture, and technical product teams to translate requirements into scalable data solutions.
  • Contribute to best practices, knowledge‑sharing, and continuous improvement across the engineering organization.
  • Participate in weekly Scrum ceremonies and collaborate in an Agile environment.

Requirements

  • Strong hands-on experience with Spark and PySpark for high-volume processing and transformation work (must-have).

Skills

AWS · Concourse · Databricks · ETL · GitHub Actions · MuleSoft · Python · Salesforce · Spark · SQL · Terraform
