
Data Engineer with Azure Databricks Expertise

Myticas Consulting

Canada · On-site · Posted today

About the role

We are looking for a Data Engineer to build robust ETL and data-processing solutions on Databricks. Strong PySpark skills and experience in lakehouse environments are essential. In this role, you will architect and optimize data pipelines and workflows, working primarily with Azure Data Lake Storage and Azure Data Factory and drawing on your background in Python and SQL to improve data-processing efficiency. Familiarity with data modeling and performance-tuning techniques is also required.

Key Responsibilities

  • Construct and maintain ETL/ELT data pipelines
  • Execute data transformations using Databricks
  • Manage scalable storage with Azure Data Lake
  • Schedule workflows using Azure Data Factory
  • Collaborate with BI systems for data integration

Requirements

  • Hands-on experience with Databricks and Delta Lake
  • Proficiency in SQL and Python is a must
  • Experience with Azure-based orchestration tools
  • Exposure to version control and CI/CD methodologies
  • Databricks certifications preferred

Grow your data engineering career by building innovative Databricks solutions in a collaborative setting.

Skills

Azure Data Factory · Azure Data Lake Storage · CI/CD · Databricks · Delta Lake · ETL · ELT · Python · PySpark · SQL · Version Control
