Senior Databricks / Azure Fabric Engineer (Immediate Joiner)
Remote · India · Full-time · Senior
About
EazyML, recognized by Gartner, specializes in Responsible AI. Our solutions enable proactive compliance and sustainable automation, and the company is associated with breakthrough startups like Amelia.ai.
Role
- Full-time remote role for a Senior Azure Fabric/Databricks Engineer with experience in Microsoft Fabric.
- This is a work-from-home role; you can be based anywhere in India (the job location is in India).
Responsibilities
- In this role, you will architect and build high-performance data engineering solutions, develop robust ETL/ELT pipelines, and lead initiatives across data architecture, feature engineering, and AI-driven data workflows.
- You will work extensively with Databricks, Spark, Delta Lake, Airflow, and Python, while applying modern GenAI tools to improve developer productivity and data quality.
Requirements
- Experience with Azure Fabric, Databricks, Azure OneLake, Spark, Delta Lake, Azure Synapse, Azure Data Factory (ADF), PySpark, Python, SQL, and Power BI; data architecture, pipeline optimization, and data migrations; ETL/ELT strategy and orchestration; building high-performance pipelines in Python and SQL; and partnering with ML teams.
- This role emphasizes architecture, governance, and best practices, along with hands-on data engineering experience.
- Experience: 7+ years | CS/IT degree preferred
Key Skills
- Microsoft Fabric
- Databricks
- Spark
- Delta Lake
- Data Architecture
- ETL/ELT
- Python
- SQL
- Azure
- LLMs/NLP
- Azure OneLake
- Pipeline optimization and data migrations
- Parquet file format
- Azure Data Factory (ADF)
- PySpark
- Azure Synapse