
Interim Azure Data Platform Administrator / Data Engineer (gn)

Michael Page

Remote (Global)

About the role


  • Exciting Company
  • Exciting Opportunities

Project Details

  • Start: 30 April 2026 (or earlier if possible)
  • Project Duration: Until 24 December 2026
  • Workload: 16-18 hours per week (min. 2 days/week, ideally 3-4 hours/day)
  • Location: Remote
  • Industry: Engineering / Technology / Data Platforms
  • Project Language: English (fluent)

Responsibilities

  • Design and implement scalable data storage solutions in Azure
  • Develop data processing solutions across structured, unstructured, and streaming sources
  • Integrate, transform, and consolidate data to deliver analytics‑ready models
  • Design and operate secure, compliant, and high‑performing data pipelines
  • Optimize platform stability, system efficiency, and data quality
  • Implement file partitioning strategies in Azure Synapse Analytics
  • Identify partitioning requirements in ADLS Gen2
  • Use SQL Serverless and Spark clusters to create and run queries
  • Develop and operate incremental data loads
  • Transform data using Apache Spark and/or T‑SQL
  • Ingest and transform data using Synapse Pipelines
  • Implement duplicate‑handling and error‑handling mechanisms
  • Develop batch processing solutions using ADLS Gen2
  • Work with Delta Lake (read/write, incremental updates); see the illustrative sketch after this list
  • Implement and operate Azure Synapse Link
  • Create, schedule, and monitor data pipelines
  • Integrate Python notebooks into data pipelines
  • Trigger and validate batch runs; handle failures
  • Monitor and optimize pipeline and query performance
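
As an illustration of the Delta Lake, partitioning, and incremental-load tasks listed above, here is a minimal PySpark sketch of the kind of upsert a Synapse Spark notebook might run. It assumes a Synapse Spark pool with the Delta Lake libraries available; the storage paths, table schema, and key column are hypothetical and not part of the job description.

# Minimal sketch of an incremental (upsert-style) load into a Delta table,
# as it might run in a Synapse Spark notebook. Paths, schema, and the key
# column are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the latest batch of source files from ADLS Gen2 (hypothetical path).
updates = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/2026-04-30/"
)

target_path = "abfss://curated@examplelake.dfs.core.windows.net/delta/orders"

if DeltaTable.isDeltaTable(spark, target_path):
    # Incremental update: merge the new batch into the existing Delta table on the key.
    target = DeltaTable.forPath(spark, target_path)
    (target.alias("t")
           .merge(updates.alias("s"), "t.order_id = s.order_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
else:
    # First run: write the initial Delta table, partitioned by a date column.
    (updates.write.format("delta")
            .partitionBy("order_date")
            .save(target_path))

In practice, a notebook like this would typically be triggered by a Synapse Pipelines notebook activity on a schedule, with retry and failure handling configured in the pipeline rather than in the notebook itself.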

Requirements

Required Qualifications

  • 3+ years of hands‑on experience with Azure Synapse Workspace (pipelines, notebooks, SQL endpoints, lake database)
  • Strong SQL and Python experience
  • Deep practical experience with:
    • Pipelines
    • Notebooks
    • ADLS Gen2
    • SQL endpoints
    • Lake Database
  • Strong English communication skills (written & spoken)
  • Experience in designing, building, and operating modern Azure data platforms
  • Ability to independently diagnose and optimize data pipelines and storage strategies

Nice‑to‑Have Skills

  • Airflow
  • Spark Delta Table libraries
  • Certifications such as DP‑700 (Microsoft Fabric Data Engineering)

Compensation

Does the project sound interesting?

I look forward to your response with the following information:

  • Your earliest availability
  • Your maximum weekly capacity
  • Confirmation that you can regularly deliver this workload
  • Your hourly rate (remote)
  • Your current profile (PDF)
  • A brief comment on your suitability (referencing the tasks & requirements listed above)

Skills

ADLS Gen2, Apache Spark, Azure Synapse Analytics, Azure Synapse Link, Delta Lake, Python, SQL, SQL Serverless, Spark
