
Data Engineer- Healthcare

Salt

UAE · On-site Today

About the role

🚀 Data Engineer

My client is looking for a Data Engineer to design and build scalable data pipelines that deliver trusted, analytics-ready datasets for BI, AI, and operational use cases across a hybrid environment.

**Must have healthcare/healthtech experience**

🔧 Key Responsibilities

  • Build pipelines across bronze, silver & gold layers (Databricks, Spark, dbt)
  • Implement data quality checks, contracts & schema validation
  • Apply governance (catalog, lineage, RBAC, metadata)
  • Deliver curated datasets, features & embeddings for AI/BI
  • Monitor pipeline health, performance & cost to meet SLAs
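To make the first two responsibilities concrete, here is a minimal plain-Python sketch of promoting raw (bronze) records to a validated (silver) layer behind a schema/quality gate. The field names and rules are illustrative assumptions; in this role the equivalent would be built on Spark DataFrames, Delta Lake tables, and dbt tests.

```python
# Sketch of a bronze -> silver promotion with a schema validation gate.
# Assumption: records arrive as plain dicts; in the real stack this would
# be Spark/Delta with dbt tests enforcing the same contract.

REQUIRED_SCHEMA = {"patient_id": str, "visit_date": str, "amount": float}

def validate(record: dict) -> bool:
    """Schema check: every required field present with the expected type."""
    return all(
        isinstance(record.get(field), ftype)
        for field, ftype in REQUIRED_SCHEMA.items()
    )

def promote_to_silver(bronze_rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split raw bronze rows into validated silver rows and rejects,
    so bad records are quarantined instead of silently propagated."""
    silver, rejects = [], []
    for row in bronze_rows:
        (silver if validate(row) else rejects).append(row)
    return silver, rejects

bronze = [
    {"patient_id": "p1", "visit_date": "2024-01-02", "amount": 120.0},
    {"patient_id": "p2", "visit_date": "2024-01-03"},  # missing amount -> reject
]
silver, rejects = promote_to_silver(bronze)
```

Rejected rows would typically land in a quarantine table with an alert, which is where the monitoring bullet above comes in.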

⚙️ Tech Stack

Databricks • Spark • Delta Lake • dbt • Azure Data Factory • Kafka/Event Hubs • CI/CD (Azure DevOps/GitHub)

🔐 Governance & Ops

  • Enforce data contracts, lineage & cataloging
  • Apply masking, tokenisation & access controls (PII/PHI)
  • Build observable pipelines with alerts, dashboards & runbooks
  • Optimize performance (partitioning, caching, cost efficiency)
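As a rough illustration of the masking/tokenisation bullet, the standard-library sketch below produces a deterministic token (so datasets can still be joined on the identifier) plus a display mask. The key handling is an assumption: in production the secret would come from Azure Key Vault, not source code, and the example identifier is purely illustrative.

```python
# Sketch of deterministic PII tokenisation plus display masking.
# Assumption: SECRET_KEY would be fetched from Azure Key Vault at runtime;
# it is hard-coded here only to keep the example self-contained.
import hmac
import hashlib

SECRET_KEY = b"demo-key"  # placeholder - never hard-code keys in production

def tokenise(pii_value: str) -> str:
    """Deterministic HMAC token: the same input always yields the same
    token, enabling joins across datasets without exposing raw PII/PHI."""
    return hmac.new(SECRET_KEY, pii_value.encode(), hashlib.sha256).hexdigest()[:16]

def mask(pii_value: str, visible: int = 4) -> str:
    """Display masking: hide all but the last few characters."""
    return "*" * max(len(pii_value) - visible, 0) + pii_value[-visible:]

token = tokenise("784-1990-1234567-1")   # illustrative identifier
masked = mask("784-1990-1234567-1")
```

Because the token is keyed (HMAC) rather than a plain hash, an attacker without the key cannot precompute tokens for known identifiers.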

✅ Requirements

  • 5+ years in Data Engineering
  • Strong SQL, data modeling (dimensional/data vault)
  • Proficiency in Python
  • Hands-on with Databricks, Spark, Delta Lake & dbt
  • Experience with Azure data services (ADF, ADLS, Key Vault)
  • Familiarity with CI/CD & container basics (Docker/Kubernetes)

➕ Nice to Have

  • Streaming (Kafka/Event Hubs) & CDC (GoldenGate)
  • Catalog/lineage tools (Purview, OvalEdge)
  • S3-compatible storage (MinIO, VAST)
  • Exposure to BI tools (Power BI) & healthcare standards (FHIR/MDR)

🎓 Education

Bachelor’s in Computer Science, Engineering, or related field

**Only successful candidates will be contacted**


Skills

Databricks • Spark • Delta Lake • dbt • Azure Data Factory • Kafka/Event Hubs • CI/CD (Azure DevOps/GitHub) • Python • SQL • data modeling • Azure data services • container basics
