
Senior Cloud Data Engineer

Hire Resolve

Remote · South Africa · Full-time · Senior · 6d ago

About the role

Hire Resolve's client is looking for a Senior Cloud Data Engineer (Remote) to join their dynamic and growing Engineering team.

The ideal candidate will have strong expertise in Databricks, Spark, Lakehouse architectures, Delta Lake tables, and other modern data engineering technologies. They should be proficient in T‑SQL and Python, with advanced coding abilities, and possess a deep understanding of cloud‑based and cloud‑agnostic data architectures. This is a senior role that requires the ability to lead projects, mentor junior team members, and work independently on complex data engineering challenges.

Responsibilities

Data Engineering & Cloud Solutions

  • Architect, build, and optimize scalable and cloud‑agnostic data solutions using Azure, Databricks, Spark, Lakehouse, and Delta Lake tables.
  • Develop, implement, and maintain big data pipelines for ingesting, processing, and storing large volumes of structured and unstructured data.
  • Manage and optimize data lake and data warehouse architectures for performance, cost, and scalability.
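To illustrate the kind of pipeline work described above: a core pattern when maintaining Delta Lake tables is the upsert (MERGE), where incoming records update existing rows and insert new ones. The sketch below mimics that semantics in plain Python (field names and the `id` key are illustrative; on the job this would run as a `MERGE INTO` on Databricks/Spark):

```python
# Simplified stand-in for Delta Lake's MERGE INTO semantics:
# rows in `updates` that match an existing key update the target;
# unmatched rows are inserted.

def merge_upsert(target, updates, key="id"):
    """Merge `updates` into `target`: update matching rows, insert new ones."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        merged.setdefault(row[key], {}).update(row)
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
updates = [{"id": 2, "name": "bobby"}, {"id": 3, "name": "carol"}]
result = merge_upsert(target, updates)
# id 1 unchanged, id 2 updated, id 3 inserted
```

In a real Delta table the same operation is declarative (`MERGE INTO target USING updates ON ...`), with the engine handling file-level rewrites and transaction logs.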

Cloud & DevOps

  • Work within Azure environments (Azure Synapse, Data Factory, ADLS, etc.) to develop and maintain cloud‑based data solutions.
  • Implement best DevOps practices for CI/CD pipelines, infrastructure‑as‑code, and automation.
  • Utilize Azure DevOps and Git for managing code repositories, version control, and continuous integration/deployment.
  • Ensure high levels of security, compliance, and data governance across data engineering processes.
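For context, a minimal CI pipeline of the kind implied by these responsibilities might look like the following. This is a hedged sketch assuming a Python-based data project; the file layout, test command, and Python version are illustrative, not taken from the posting:

```yaml
# Illustrative azure-pipelines.yml for a Python data-engineering repo.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run unit tests
```

Real pipelines in this role would typically add stages for linting, infrastructure-as-code validation, and deployment to Azure environments.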

Big Data Processing & Development

  • Utilize Spark, Databricks, and distributed computing to process and analyse large datasets efficiently.
  • Write advanced Python and T‑SQL scripts for data transformations, ETL/ELT processes, and real‑time data processing.
  • Optimize performance for data pipelines and SQL queries for efficiency and cost‑effectiveness.
  • Experience with graph databases is a strong advantage.
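A minimal sketch of the transformation step in the ETL/ELT work described above, in plain Python (the field names and cleaning rules are illustrative; in practice this logic would run as a Spark/Databricks job over DataFrames):

```python
# Toy extract-transform step: normalise raw records before loading.
# Illustrative only; real pipelines here would use Spark DataFrames.

def transform(records):
    """Drop rows missing an id, trim/lowercase names, and cast amounts to float."""
    out = []
    for r in records:
        if r.get("id") is None:
            continue  # skip unusable rows
        out.append({
            "id": int(r["id"]),
            "name": str(r.get("name", "")).strip().lower(),
            "amount": float(r.get("amount", 0) or 0),
        })
    return out

raw = [
    {"id": "1", "name": "  Alice ", "amount": "10.5"},
    {"id": None, "name": "ghost"},
    {"id": "2", "name": "Bob", "amount": None},
]
clean = transform(raw)
# two rows survive: the id-less record is dropped
```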

Collaboration & Leadership

  • Work closely with data scientists, analysts, and business stakeholders to understand data requirements and develop solutions that meet business objectives.
  • Lead initiatives to enhance data engineering capabilities, introduce new technologies, and drive best practices.
  • Mentor junior engineers, conduct code reviews, and contribute to building a culture of technical excellence.
  • Communicate effectively with technical and non‑technical stakeholders, translating complex data concepts into actionable insights.

Requirements

Education & Experience

  • 6+ years of experience in data engineering.
  • Computer science degree, or a comparable data engineering certification on one of the major cloud platforms.

Minimum Required Skills and Experience

  • Extensive experience in cloud‑based data engineering, with expertise in Azure (Azure Synapse, Azure Data Factory, ADLS, etc.).
  • Strong expertise in cloud‑agnostic data tools such as Databricks, Spark, Delta Lake, and Lakehouse architectures.
  • Advanced proficiency in T‑SQL and Python for developing complex data pipelines, transformations, and optimizations.
  • Hands‑on experience in big data processing, ETL/ELT, and data pipeline orchestration.
  • Solid understanding of data modelling, warehouse design, and data lakehouse architecture.
  • Experience in setting up and managing CI/CD processes with Azure DevOps (or similar DevOps tools).
  • Strong knowledge of DevOps, CI/CD, Git, and infrastructure‑as‑code for automated deployments.
  • Excellent verbal and written communication skills, with the ability to collaborate across teams and explain complex data concepts clearly.
  • Ability to work independently on high‑impact projects while maintaining a team‑oriented approach.

How to Apply

Interested applicants should submit their CV to Gaby Turner at gaby.turner@hireresolve.us, or forward it directly to the IT Department at itcareers@hireresolve.za.com.

Skills

ADLS, Azure, Azure Data Factory, Azure DevOps, Azure Synapse, CI/CD, Databricks, Delta Lake, DevOps, ETL, Git, Graph databases, Infrastructure-as-code, Lakehouse, Python, Spark, SQL, T-SQL
