Data Engineer (Azure / Databricks / ADF)
DBiz.ai
India · On-site · Full-time · Senior · Posted yesterday
Role Overview
We are seeking a Data Engineer with strong expertise in ETL development using Python and deep experience across Azure data services. The role involves building scalable data pipelines, integrating diverse data sources (including APIs), and designing analytics-ready data models using Databricks and Azure-native tools.
Requirements
- Strong hands-on experience with Azure Data Factory (ADF) – pipeline orchestration, data movement, triggers
- Strong hands-on experience with Azure Databricks – PySpark development, Delta Lake, performance tuning
- Strong hands-on experience with Azure Data Lake Storage (ADLS Gen2) – data storage, partitioning strategies
- Experience with Azure Key Vault – secrets and credential management
- Experience with Azure Active Directory (AAD) – access control and authentication
- Experience with Azure Monitor / Log Analytics – pipeline monitoring and logging
- Strong proficiency in Python (data processing, APIs, ETL frameworks)
- Hands-on experience with Databricks (PySpark)
- Strong knowledge of Azure data ecosystem and architecture patterns
- Experience with API-based data extraction and integration
- Good understanding of data structures and data modeling concepts
- Strong foundation in data warehousing (DWH) concepts
- Experience working with large-scale data processing systems
- Ability to work independently and deliver end-to-end solutions
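To make the "API-based data extraction and integration" requirement concrete, here is a minimal pure-Python sketch of a generic paginated-ingestion helper. The `fetch_page` callable and the `"items"`/`"next"` response keys are hypothetical stand-ins; a real integration would issue HTTP requests against the source system's actual API.

```python
# Sketch of a generic paginated-API ingestion helper. The response shape
# ("items" list, "next" page token) is an assumption for illustration.

def extract_all(fetch_page, start_page=1):
    """Yield every record across pages until the API reports no next page."""
    page = start_page
    while page is not None:
        response = fetch_page(page)      # e.g. GET /records?page=<page>
        yield from response["items"]
        page = response.get("next")      # None terminates the loop

# Fake in-memory "API" standing in for a remote endpoint.
def fake_fetch(page):
    data = {1: {"items": [{"id": 1}, {"id": 2}], "next": 2},
            2: {"items": [{"id": 3}], "next": None}}
    return data[page]

records = list(extract_all(fake_fetch))  # all pages flattened into one list
```

Separating pagination logic from the HTTP call keeps the framework reusable across API-based, batch, and file-based sources, as the ingestion-framework bullet above suggests.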
Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Python and Azure Data Factory (ADF)
- Build and optimize data pipelines using Azure Databricks (PySpark)
- Develop ingestion frameworks for API-based, batch, and file-based data sources
- Implement end-to-end data pipelines on Azure leveraging multiple data services
- Design and implement data models (star schema, dimensional models) for analytics, following the medallion architecture
- Ensure data quality, validation, and monitoring across pipelines
- Optimize pipeline performance and cost within the Azure ecosystem
- Work independently to troubleshoot and resolve data engineering issues
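As a rough illustration of the layered (medallion-style) flow the responsibilities above describe, here is a minimal pure-Python sketch: bronze (raw) records are validated into a silver layer, then aggregated into an analytics-ready gold layer. All field names and cleaning rules are hypothetical; in practice this role would implement these steps with PySpark DataFrames and Delta Lake on Databricks.

```python
# Minimal medallion-style flow in plain Python: bronze (raw) -> silver
# (cleaned/validated) -> gold (aggregated for analytics).
# Field names and rules are illustrative assumptions only.

def to_silver(bronze_rows):
    """Drop malformed rows and normalise types (silver layer)."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # data quality: discard incomplete records
        silver.append({
            "order_id": str(row["order_id"]),
            "country": str(row.get("country", "unknown")).upper(),
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate revenue per country (gold layer, analytics-ready)."""
    totals = {}
    for row in silver_rows:
        totals[row["country"]] = totals.get(row["country"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "country": "in", "amount": "10.5"},
    {"order_id": 2, "country": "us", "amount": 4},
    {"order_id": None, "amount": 99},   # malformed -> dropped in silver
]
gold = to_gold(to_silver(bronze))
```

The same bronze/silver/gold separation maps directly onto the "data quality, validation, and monitoring" responsibility: each layer is a natural checkpoint for validation rules and pipeline metrics.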
Skills
ADF · AAD · API · Azure · Azure Active Directory · Azure Data Factory · Azure Data Lake Storage · Azure Databricks · Azure Key Vault · Azure Monitor · Databricks · Delta Lake · ETL · Log Analytics · Medallion architecture · Python · PySpark · Star schema · Data warehousing