
Microsoft Azure Cloud Engineer

Explarity Solutions

Pune · On-site · Full-time · Mid Level · Today

About the Role

We are seeking a highly skilled Azure Data Engineer with expertise in Microsoft Fabric, Azure Databricks, and Azure Synapse Analytics. The ideal candidate will design, implement, and optimize cloud‑based data solutions within the Microsoft ecosystem. This role requires strong technical knowledge of Azure data services, data warehousing, and big data processing to support business intelligence, analytics, and AI initiatives.

Key Responsibilities

  • Design and develop end‑to‑end data solutions using Azure Data Factory (ADF), Azure Databricks, Synapse Analytics, and Azure SQL.
  • Implement ETL/ELT pipelines to extract, transform, and load data from multiple sources into Azure‑based data lakes and warehouses.
  • Optimize big data processing using Apache Spark, Delta Lake, and Databricks.
  • Configure and manage Synapse Analytics (formerly Azure SQL Data Warehouse) for scalable data storage and querying.
  • Develop data models, indexing strategies, and performance tuning for efficient querying.
  • Work with Azure Blob Storage, ADLS (Azure Data Lake Storage), and Parquet/ORC file formats for data storage.
  • Implement data governance, security, and compliance policies using Azure Purview, RBAC, and Data Masking.
  • Automate workflows using Azure DevOps, CI/CD pipelines, and Infrastructure as Code (Terraform/ARM templates).
  • Collaborate with data analysts, business intelligence teams, and cloud architects to deliver analytics‑driven solutions.
  • Troubleshoot performance bottlenecks, optimize queries, and ensure data reliability and availability.
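As an illustrative aside (not part of the posting itself), the ETL/ELT and incremental‑load responsibilities above can be sketched in plain Python. The source rows, watermark field, and merge‑by‑id sink below are hypothetical stand‑ins for what ADF, Databricks, and a Synapse table would provide in a real pipeline:

```python
from datetime import datetime, timezone

# Hypothetical source rows; in a real pipeline these would arrive from a
# database, an Event Hub, or files landed in ADLS.
SOURCE = [
    {"id": 1, "amount": "19.99", "updated_at": "2024-01-01T10:00:00"},
    {"id": 2, "amount": "5.00",  "updated_at": "2024-01-02T12:30:00"},
    {"id": 3, "amount": "42.50", "updated_at": "2024-01-03T09:15:00"},
]

def extract(watermark: str) -> list[dict]:
    """Pull only rows changed since the last run (incremental load)."""
    return [r for r in SOURCE if r["updated_at"] > watermark]

def transform(rows: list[dict]) -> list[dict]:
    """Cast types and stamp a load time, as a transform step would."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"id": r["id"], "amount": float(r["amount"]), "_loaded_at": now}
        for r in rows
    ]

def load(rows: list[dict], sink: dict) -> None:
    """Upsert into the target keyed by id (merge semantics)."""
    for r in rows:
        sink[r["id"]] = r

sink: dict = {}
load(transform(extract(watermark="2024-01-01T23:59:59")), sink)
print(sorted(sink))  # → [2, 3]: only rows newer than the watermark land
```

The watermark‑filtered extract, typed transform, and keyed upsert mirror what a Delta Lake MERGE or an ADF incremental copy activity does at scale.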

Requirements

  • Bachelor's degree in Computer Science, Data Engineering, IT, or a related field.
  • Minimum 3 years of relevant experience in data engineering with Microsoft Azure data services.
  • Strong expertise in Microsoft Fabric, Azure Databricks, Apache Spark, and Synapse Analytics.
  • Proficiency in SQL, Python, Scala, or PySpark for data processing.
  • Hands‑on experience with Azure Data Factory (ADF), Azure Functions, and Event Hubs.
  • Knowledge of data warehouse concepts, star/snowflake schema, and performance tuning.
  • Experience with Azure security best practices (RBAC, Managed Identities, Data Encryption).
  • Familiarity with machine learning and AI integration in Databricks is a plus.
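To illustrate the star‑schema concept named in the requirements (with hypothetical tables, not taken from the posting): a fact table holds only foreign keys and measures, while dimension tables hold descriptive attributes that queries join in and aggregate over:

```python
# Dimension tables, keyed by surrogate key.
dim_product = {
    101: {"name": "Widget", "category": "Hardware"},
    102: {"name": "Gadget", "category": "Hardware"},
}

# Fact table: one row per sale, holding foreign keys and a measure.
fact_sales = [
    {"product_key": 101, "date_key": 20240101, "amount": 19.99},
    {"product_key": 102, "date_key": 20240101, "amount": 5.00},
    {"product_key": 101, "date_key": 20240101, "amount": 42.50},
]

def revenue_by_category() -> dict[str, float]:
    """Join the fact to its product dimension and aggregate a measure."""
    totals: dict[str, float] = {}
    for row in fact_sales:
        category = dim_product[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(revenue_by_category())  # total revenue per category
```

A snowflake schema would further normalize the dimensions (e.g. splitting `category` into its own table) at the cost of extra joins.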

Skills

Microsoft Fabric · Azure Databricks · Apache Spark · Synapse Analytics · SQL · Python · Scala · PySpark · Azure Data Factory (ADF) · Azure Functions · Event Hubs · Data warehouse concepts (star/snowflake schema) · Performance tuning · Azure security best practices (RBAC, Managed Identities, Data Encryption) · Azure Purview · Data Masking · Azure DevOps · CI/CD pipelines · Terraform · ARM templates · Azure Blob Storage · Azure Data Lake Storage (ADLS) · Parquet/ORC file formats
