Azure Data Engineer

Enzo Tech Group

Canada · On-site · Full-time · Posted yesterday

About the role

Our client is undergoing a large-scale data transformation, with multiple concurrent data platform builds and a strategic move toward Microsoft Fabric and Azure cloud technologies.

This role focuses on building modern, scalable data pipelines from scratch; the majority of the work is greenfield rather than migration.

Key Responsibilities

  • Design, build, and operate end-to-end data pipelines across cloud and hybrid environments (Azure & Microsoft Fabric).
  • Develop batch and real-time data ingestion, transformation, and delivery solutions using modern data platform patterns (lakehouse / warehouse); a minimal sketch of this pattern follows the list.
  • Work with structured and semi-structured data, applying strong data modelling and engineering principles.
  • Contribute to the implementation of scalable data platforms aligned with enterprise architecture standards.
  • Support and modernise legacy/on-prem data systems as part of a broader cloud transformation strategy.
  • Monitor and maintain data pipeline performance, ensuring reliability, cost-efficiency, and stability.
  • Troubleshoot and resolve data issues using logging, alerting, and root cause analysis.
  • Collaborate closely with business stakeholders to translate requirements into effective data solutions.
  • Document processes, pipelines, and runbooks to support ongoing operations and handovers.
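
As a rough illustration of the batch ingestion and lakehouse delivery work described in this list, the PySpark sketch below loads semi-structured JSON from a landing zone into a curated Delta table. It is a minimal, hypothetical example: the paths, table names, and columns are assumptions, not details of the client's platform.

# Minimal PySpark sketch: batch ingestion of semi-structured JSON into a
# curated lakehouse (Delta) table. Paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_ingest").getOrCreate()

# Read raw semi-structured data from a landing zone in the data lake.
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Apply basic cleansing and typing before delivery to the curated layer.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)

# Write to a curated lakehouse table in Delta format, partitioned by date.
(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated.orders"))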

Experience & Skills Required

  • Strong experience building data pipelines within Azure environments (ADF, Synapse, Fabric).
  • Hands-on experience with Microsoft Fabric (highly desirable / priority skill).
  • Proficiency in Python and/or PySpark for data processing.
  • Strong SQL skills and understanding of relational data modelling (an illustrative sketch follows this list).
  • Experience working with data lake / lakehouse architectures and distributed processing.
  • Exposure to hybrid environments (cloud + on-prem systems).
  • Experience with Microsoft Purview or data governance tools is a plus.
  • Familiarity with tools like WhereScape RED is beneficial but not essential.
  • Strong communication skills — able to engage with both technical and non-technical stakeholders.
  • Ability to work in fast-paced environments with multiple concurrent data projects.
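
To give a concrete sense of the SQL and relational modelling skills listed above, the hypothetical Spark SQL sketch below splits a curated orders table into a simple dimension/fact pair. All table and column names are illustrative assumptions, not part of the client's actual model.

# Hypothetical Spark SQL sketch: deriving a simple dimension/fact pair from a
# curated lakehouse table (assumes a Delta-backed catalog). Names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_dim_model").getOrCreate()

# Customer dimension: one row per customer; MAX() collapses repeated attribute values.
spark.sql("""
    CREATE OR REPLACE TABLE warehouse.dim_customer AS
    SELECT customer_id,
           MAX(customer_name)   AS customer_name,
           MAX(customer_region) AS customer_region
    FROM curated.orders
    GROUP BY customer_id
""")

# Order fact: one row per order, keyed to the customer dimension by customer_id.
spark.sql("""
    CREATE OR REPLACE TABLE warehouse.fact_order AS
    SELECT order_id, customer_id, order_date, amount
    FROM curated.orders
""")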

Project Overview

You will play a key role in shaping how data is engineered and consumed across the organisation, working closely with both technical teams and business stakeholders.

This is an opportunity to work on cutting-edge data technology in a high-impact environment, with strong potential for long-term growth and ownership.


Skills

ADF · Azure · Data modeling · Fabric · Lakehouse · Microsoft Fabric · On-prem systems · Python · PySpark · SQL · Synapse · Warehouse
