
ZEN102 - Azure Data Engineer

Data Monkey (PTY) LTD

On-site · Full-time · Senior · Posted 4 days ago

About the role

We are seeking an experienced Azure Data Engineer to join a high-performing data engineering team based in Sandton, Johannesburg. The role focuses on designing, building, and maintaining scalable, secure, and high-performance data platforms within the Azure ecosystem. The successful candidate must have strong expertise in Azure data services, advanced T-SQL, Python-based ETL development, and modern data warehouse modelling methodologies.

Responsibilities

  • Design, develop, and maintain end-to-end Azure data pipelines using ADF, Databricks, Synapse Analytics, and ADLS Gen2
  • Develop robust and performant ETL/ELT processes using Python and SQL
  • Perform advanced data analysis and exploration using T-SQL
  • Implement automated testing frameworks for data pipelines to ensure data quality and reliability
  • Develop and optimise SQL stored procedures, functions, and transformations
  • Apply data warehouse modelling methodologies including Kimball, Dimensional Modelling, and Data Vault 2.0
  • Build and manage data solutions leveraging Azure Data Platform products and Microsoft Fabric
  • Integrate source control and manage deployments using Azure DevOps (ADO)
  • Design and support CI/CD and deployment pipelines for data engineering solutions
  • Ensure adherence to data governance, security, and compliance practices, including awareness of tools such as Azure Purview
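As an illustration of the automated-testing responsibility above, a minimal data-quality check for one pipeline stage might look like the following sketch in plain Python. The column names and sample extract are hypothetical; in practice, checks like these would run against Databricks or Synapse tables rather than in-memory rows.

```python
# Sketch of automated data-quality checks for a pipeline stage.
# The "customer" extract and its columns are illustrative only.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return True if every value of `column` is distinct."""
    values = [row.get(column) for row in rows]
    return len(values) == len(set(values))

# Validate a small extract before loading it downstream.
extract = [
    {"customer_id": 1, "name": "Ada"},
    {"customer_id": 2, "name": None},
]

assert check_not_null(extract, "customer_id") == []   # key is always present
assert check_unique(extract, "customer_id")           # key is unique
assert check_not_null(extract, "name") == [1]         # one missing name flagged
```

In a real pipeline these assertions would typically live in a test suite (for example pytest) wired into the CI/CD stage, so that a failing quality check blocks deployment.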

Competency

All of the following competencies are mandatory:

  • Strong front-end engineering and UI development capability
  • Excellent problem-solving and analytical skills
  • Ability to work effectively in collaborative Agile teams
  • Strong ownership, accountability, and delivery focus
  • Good communication skills across technical and non-technical stakeholders
  • Ability to work independently and take initiative in complex problem-solving
  • Strong self-motivation with a continuous-improvement mindset

Technical Skills

All of the following are mandatory:

  • 7+ years' total experience in data engineering
  • 3–4+ years' expert-level experience as an Azure Data Engineer
  • Advanced proficiency in T-SQL for data analysis and transformation
  • Strong Python skills for ETL development and data wrangling, especially in Databricks
  • Experience writing automated tests for data pipelines
  • Strong SQL programming skills, including stored procedures and functions
  • Proven experience with data warehouse modelling (Kimball, Dimensional Modelling, Data Vault 2.0)
  • Hands-on experience with Azure data services, including ADF, Databricks, Synapse Analytics, and ADLS Gen2
  • Experience building robust, scalable, high-performance ETL pipelines
  • Experience with Azure Data Platform products and Microsoft Fabric
  • Proficiency with source control and Azure DevOps (ADO)
  • Awareness of data governance tools and practices, such as Azure Purview
  • Solid understanding of deployment pipelines and CI/CD practices

General Information

  • Contract: 12-month subcontract, renewable or convertible to permanent
  • Location: Sandton, Johannesburg
  • Mode of Work: Work from office

Skills

ADF, ADLS Gen2, ADO, Azure, Azure DevOps, Azure Purview, Azure Synapse Analytics, Databricks, Data Vault 2.0, Dimensional Modelling, Kimball, Microsoft Fabric, Python, SQL, T-SQL
