
Azure Data Engineer

Business Management Associates, Inc.

Frederick · On-site · Full-time · Senior · Today

About the role

Job Summary

We are seeking an experienced Azure Data Engineer to design, build, and optimize scalable data pipelines and lakehouse architectures within the Azure ecosystem. This role is responsible for developing end-to-end data ingestion, transformation, and governance frameworks using Azure Data Factory, Databricks, and ADLS Gen2 to support enterprise analytics and reporting needs.

The ideal candidate will have hands-on experience managing batch and near real-time data ingestion from diverse structured and unstructured sources, implementing Spark-based transformation processes across Bronze, Silver, and Gold data layers, and establishing robust data quality, security, and monitoring frameworks. This position also requires expertise in data governance, automation, and cost optimization to ensure high-performing, secure, and reliable data platforms aligned with organizational SLAs.

Key Responsibilities

Azure Data Engineering & Lakehouse Architecture

  • Design and manage scalable data ingestion pipelines using Azure Data Factory and Azure Service Bus for batch and near real-time processing
  • Implement Bronze, Silver, and Gold data layer transformations using Databricks (PySpark, SparkSQL) within a lakehouse architecture
  • Integrate structured, semi-structured, and unstructured data sources using formats such as Parquet, Avro, and JSON into ADLS Gen2
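
As a rough sketch of the Bronze → Silver → Gold flow described above: in Databricks this would be implemented with PySpark DataFrames reading and writing between ADLS Gen2 paths, but the plain-Python example below illustrates only the layering idea. All field names, cleaning rules, and sample records are hypothetical.

```python
import json

# Hypothetical raw events landed in the Bronze layer (as-ingested, schema-on-read).
bronze = [
    '{"order_id": "A1", "amount": "19.99", "country": "us"}',
    '{"order_id": "A2", "amount": "bad",   "country": "US"}',   # malformed amount
    '{"order_id": "A1", "amount": "19.99", "country": "us"}',   # duplicate record
]

def to_silver(bronze_rows):
    """Silver: parse, validate, standardize, and de-duplicate Bronze records."""
    seen, silver = set(), []
    for raw in bronze_rows:
        rec = json.loads(raw)
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            continue  # drop records that fail validation
        rec["country"] = rec["country"].upper()
        if rec["order_id"] in seen:
            continue  # drop duplicates on the business key
        seen.add(rec["order_id"])
        silver.append(rec)
    return silver

def to_gold(silver_rows):
    """Gold: aggregate Silver records into an analytics-ready summary."""
    totals = {}
    for rec in silver_rows:
        totals[rec["country"]] = totals.get(rec["country"], 0.0) + rec["amount"]
    return totals

silver = to_silver(bronze)
print(to_gold(silver))  # {'US': 19.99}
```

A production pipeline would express the same steps as Spark transformations (e.g. casts, filters, and `dropDuplicates`) persisted as Delta tables per layer.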

Data Governance, Security & Quality Management

  • Establish data quality frameworks and validation processes using Databricks and Azure Data Factory
  • Configure RBAC and ACL-based access controls to secure sensitive datasets and ensure compliance
  • Manage credential security with Azure Key Vault and enforce governance standards across data platforms
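
As a minimal illustration of ACL-style access control over datasets (the roles, dataset paths, and permission map below are invented; in Azure this would be expressed as RBAC role assignments plus ADLS Gen2 POSIX-style ACLs):

```python
# Hypothetical role-to-permission map for datasets in a data lake.
ACLS = {
    "sales/silver": {"data_engineer": {"read", "write"}, "analyst": {"read"}},
    "hr/gold":      {"data_engineer": {"read"}},  # sensitive: no analyst access
}

def is_allowed(role: str, dataset: str, action: str) -> bool:
    """Return True if `role` holds the `action` permission on `dataset`."""
    return action in ACLS.get(dataset, {}).get(role, set())

print(is_allowed("analyst", "sales/silver", "read"))  # True
print(is_allowed("analyst", "hr/gold", "read"))       # False
```

The deny-by-default lookup (missing dataset or role yields an empty permission set) mirrors the least-privilege posture these bullets call for.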

Automation, Monitoring & Cloud Optimization

  • Automate pipeline orchestration using Databricks Workflows and CI/CD tools such as Azure DevOps, GitHub Actions, and Terraform

  • Monitor pipeline health and performance using Azure Monitor, logging, and alerting to meet SLA requirements

  • Optimize ADLS storage performance and manage cloud costs using Azure Cost Management best practices

  • All other duties as assigned
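
A simple picture of the SLA-oriented monitoring the bullets above describe (the thresholds and run records are invented for illustration; in Azure this logic would typically live in Azure Monitor alert rules over pipeline-run metrics rather than application code):

```python
from datetime import timedelta

# Hypothetical pipeline-run records: (pipeline name, duration, succeeded?).
runs = [
    ("ingest_orders", timedelta(minutes=42), True),
    ("ingest_orders", timedelta(minutes=75), True),   # breaches the SLA below
    ("curate_gold",   timedelta(minutes=12), False),  # failed run
]

SLA = timedelta(minutes=60)  # assumed per-run duration SLA

def sla_alerts(run_records, sla):
    """Flag runs that failed outright or exceeded the duration SLA."""
    alerts = []
    for name, duration, ok in run_records:
        if not ok:
            alerts.append(f"{name}: run failed")
        elif duration > sla:
            alerts.append(f"{name}: duration {duration} exceeds SLA {sla}")
    return alerts

for alert in sla_alerts(runs, SLA):
    print(alert)
```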

Required Skills:

  • Azure Data Factory for ETL/ELT operations
  • Building and supporting self-service BI solutions using Azure Databricks and the lakehouse architecture
  • Developing data pipelines and transformations using PySpark and Spark SQL
  • Enabling analytics for business users with Databricks AI/BI features and Genie
  • Applying ML techniques for data insights, predictions, and feature engineering
  • Collaborating with analytics and business teams to deliver scalable, governed BI and ML solutions
  • Databricks (PySpark, Spark SQL) for data transformations
  • Azure Data Lake Storage (ADLS Gen2)
  • SQL Server, PostgreSQL, Cosmos DB, CRM, and ERP systems
  • Data governance, RBAC, and ACLs for managing permissions
  • CI/CD tools: Azure DevOps, GitHub Actions, Terraform
  • Azure monitoring, alerting, and logging tools

Qualifications:

Education & Experience:

  • Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field (Master's preferred)

  • Minimum of 7–9 years of relevant experience

  • Substantial professional experience in a related field may be considered in lieu of a formal degree

Skills

ACLs · ADLS Gen2 · Azure Data Factory · Azure Data Lake Storage · Azure DevOps · Azure Key Vault · Azure Monitor · Azure Service Bus · CI/CD · Cosmos DB · CRM · Databricks · ERP · ETL · GitHub Actions · ML · PostgreSQL · PySpark · RBAC · Spark SQL · SQL Server · Terraform
