
Data Engineer – Azure

DATABEAT

India · On-site · Full-time · Senior

About the role

Job Title

Data Engineer – Azure

Location

Hyderabad

Experience Required

5+ years

Role Summary

We are looking for a skilled Azure Data Engineer with 5+ years of hands‑on experience in building, maintaining, and optimizing data pipelines and analytics solutions on Microsoft Azure. The ideal candidate will work closely with senior engineers, data scientists, and business teams to deliver reliable and scalable data solutions. This role focuses on development, optimization, and support of cloud‑based data platforms while following best practices in data engineering and DevOps.

Technical Skills

  • Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure SQL Database, ADLS Gen2, Azure Functions, Azure Event Hub, Azure Stream Analytics, Azure DevOps, GitHub, CI/CD Pipelines, ARM Templates, Terraform, Power BI, Azure Purview, Apache Spark, PySpark, SQL, Python.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using Azure Data Factory, Databricks (PySpark), SQL, and Python.
  • Assist in migrating on‑premises and legacy data systems to Azure Data Lake and Azure Synapse Analytics.
  • Optimize data processing workflows for performance, reliability, and cost efficiency.
  • Implement data validation, quality checks, and monitoring for production pipelines.
  • Build and support batch and near real‑time data processing solutions using Event Hub and Stream Analytics.
  • Collaborate with data analysts and data scientists to deliver curated and analytics‑ready datasets.
  • Translate business requirements into technical data solutions under guidance from senior architects.
  • Implement security, governance, and access control using Azure‑native tools and best practices.
  • Support CI/CD pipelines and infrastructure automation using Azure DevOps, GitHub, ARM, or Terraform.
  • Participate in code reviews and contribute to documentation and knowledge sharing.
  • Troubleshoot and resolve data pipeline failures and performance issues.
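The data validation and quality-check responsibility above can be sketched as a small, framework-agnostic helper. In practice this logic would typically run inside a Databricks/PySpark job before writing curated data; the plain-Python version below just shows the pattern. The column names (`order_id`, `amount`, `event_ts`) and rules are illustrative assumptions, not taken from this posting.

```python
# Minimal sketch of row-level data validation for a pipeline,
# splitting records into valid rows and quarantined rows with
# reasons attached for monitoring/alerting.
# Schema and rules below are hypothetical examples.

REQUIRED_COLUMNS = {"order_id", "amount", "event_ts"}  # assumed schema


def validate_row(row: dict) -> list[str]:
    """Return a list of rule violations for a single record."""
    errors = []
    present = {k for k, v in row.items() if v is not None}
    missing = REQUIRED_COLUMNS - present
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        errors.append("amount must be non-negative")
    return errors


def split_valid_invalid(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition records; invalid ones keep their violation reasons."""
    valid, invalid = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            invalid.append({"row": row, "errors": errors})
        else:
            valid.append(row)
    return valid, invalid
```

The same split-and-quarantine pattern maps directly onto a PySpark job: a `filter` for the valid path and a second output for rejected rows, feeding a monitoring dashboard.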

Requirements

  • 5+ years of hands‑on experience as a Data Engineer with strong exposure to Azure services.
  • Experience building ETL/ELT pipelines, data warehouses, and data lakes on Azure.
  • Hands‑on experience with Azure Data Factory, Databricks, ADLS Gen2, Azure SQL, and Synapse Analytics.
  • Strong knowledge of Apache Spark, PySpark, SQL, and Python.
  • Experience with modern data architectures such as Lakehouse and event‑driven systems.
  • Exposure to Infrastructure as Code using ARM Templates or Terraform.
  • Working knowledge of CI/CD pipelines using Azure DevOps or GitHub Actions.
  • Familiarity with Azure Purview and role‑based access control (RBAC).
  • Strong analytical and problem‑solving skills.
  • Good communication skills and ability to work collaboratively in a team environment.

Preferred Skills

  • Exposure to Power BI, Snowflake, or other analytical tools.
  • Basic understanding of streaming technologies like Kafka or Delta Lake.
  • Experience with Docker, Kubernetes, or AKS is a plus.
  • Willingness to learn new Azure services and data engineering best practices.

Skills

ADLS Gen2 · Apache Spark · ARM Templates · Azure · Azure Data Factory · Azure Databricks · Azure DevOps · Azure Event Hub · Azure Functions · Azure Purview · Azure SQL Database · Azure Stream Analytics · Azure Synapse Analytics · CI/CD Pipelines · GitHub · Power BI · Python · PySpark · SQL · Terraform
