Data Engineer – Microsoft Fabric

AgileEngine

Bhopal · Hybrid · Full-time · Senior

About the role

Data Engineer (Senior) – Mumbai / Pune / Bangalore

Hybrid Opportunity | 6-8 Years’ Experience | Financial Data & Microsoft Fabric

We’re looking for a strong Data Engineer to join a globally strategic data modernisation programme at one of the world’s leading investment intelligence firms. You’ll design, build and maintain state‑of‑the‑art data pipelines on Microsoft Fabric as part of a platform that powers investment decision tools used across the globe. This is a high‑ownership, high‑impact role — not just another pipeline job.


Must‑Have Skills

  • 6-8 years of hands‑on data engineering experience
  • Strong Python programming — pipelines, transformation logic and automation
  • Proficient in SQL — window functions, partitioning and time‑series query patterns (see the sketch after this list)
  • Hands‑on experience with Microsoft Fabric — OneLake, Fabric Data Factory, Lakehouse and Warehouse
  • Working knowledge of Delta Lake — incremental merges, Z‑ordering and Change Data Feed
  • Familiarity with Azure cloud technologies — ADF, Azure SQL, Key Vault and RBAC
  • REST API experience — consuming external vendor APIs and building service integrations
  • Git‑based collaboration — branching strategies, PR workflows and pipeline‑as‑code
  • AI‑assisted development tools — GitHub Copilot, Cursor or equivalent
  • Strong sense of ownership across ingestion, QA, correction management and audit trails
  • Excellent communication skills — you’ll work with global cross‑functional teams across engineering, compliance and business
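
To give a flavour of the time‑series SQL patterns above, here is a minimal PySpark sketch of the classic partition‑and‑window query ("latest observation per instrument"). This is purely illustrative: the table and column names (lakehouse.prices, instrument_id, as_of) are hypothetical, not part of the actual platform.

    from pyspark.sql import SparkSession, functions as F, Window

    spark = SparkSession.builder.getOrCreate()
    prices = spark.read.table("lakehouse.prices")  # hypothetical table

    # Rank rows within each instrument by observation time, newest first,
    # then keep only the top-ranked row per instrument.
    w = Window.partitionBy("instrument_id").orderBy(F.col("as_of").desc())
    latest = (
        prices.withColumn("rn", F.row_number().over(w))
              .where(F.col("rn") == 1)
              .drop("rn")
    )

The same partition-plus-ordering idea underpins most time‑series query work in the role, whether expressed in Spark SQL or the DataFrame API.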

Key Responsibilities

  • Build and maintain scalable distributed data pipelines on Microsoft Fabric, including OneLake lakehouse layers and Delta Lake merge workflows (see the sketch after this list)
  • Design and implement bitemporal data models to support certified regulatory‑grade time‑series datasets
  • Build and maintain software testing frameworks — unit, non‑regression and user acceptance — for pipelines and transformation logic
  • Acquire, normalise, transform and release large volumes of financial market data
  • Support AI solution integration including AI‑assisted ingestion, anomaly detection and semantic search over the lakehouse
  • Collaborate actively with stakeholders across data engineering, compliance and business teams globally
  • Contribute to shared platform services — this is a platform role, not a vertical‑specific one
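
The Delta Lake merge workflows mentioned above typically boil down to an upsert: apply a batch of new or corrected rows to a lakehouse table. A minimal sketch, assuming a Fabric Spark notebook; the table names, staging source and join keys are all hypothetical:

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()
    updates = spark.read.table("staging.price_corrections")  # hypothetical staging batch
    target = DeltaTable.forName(spark, "lakehouse.prices")   # hypothetical Delta table

    (
        target.alias("t")
        .merge(
            updates.alias("s"),
            "t.instrument_id = s.instrument_id AND t.as_of = s.as_of",
        )
        .whenMatchedUpdateAll()     # correct rows that already exist
        .whenNotMatchedInsertAll()  # insert genuinely new rows
        .execute()
    )

Incremental merges like this pair naturally with Delta's Change Data Feed, which lets downstream consumers pick up only the rows that changed.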

Good to Have

  • Experience with pandas, PySpark or equivalent data manipulation libraries
  • Familiarity with Microsoft Purview for data lineage, cataloguing and sensitivity classification
  • Understanding of bitemporal data modelling for financial and regulatory datasets (see the sketch after this list)
  • Knowledge of financial reference data — equities, fixed income, corporate actions or index composition
  • Exposure to CI/CD pipelines and automated environment provisioning
  • Experience with LLMs and Agentic AI — anomaly detection, semantic search or natural language querying over structured data is a strong plus
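
For the bitemporal modelling called out above, the core idea is two independent time axes: when a fact was true in the market (valid time) and when the system knew it (knowledge time). A minimal PySpark sketch of an "as‑of" slice over such a table; all column names are illustrative:

    from pyspark.sql import functions as F

    def as_of(df, valid_at, known_at):
        """Rows that were true in the market at `valid_at`,
        as the platform had recorded them at `known_at`.
        Open-ended intervals are modelled as NULL end columns."""
        return df.where(
            (F.col("valid_from") <= F.lit(valid_at))
            & (F.col("valid_to").isNull() | (F.col("valid_to") > F.lit(valid_at)))
            & (F.col("recorded_from") <= F.lit(known_at))
            & (F.col("recorded_to").isNull() | (F.col("recorded_to") > F.lit(known_at)))
        )

Replaying a query with an earlier known_at reproduces what a certified dataset looked like at publication time, which is what makes regulatory‑grade corrections auditable.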

Application Details

Interested candidates, please share:

  1. Email ID
  2. Relevant Experience
  3. CCTC / ECTC (current and expected cost to company)
  4. Notice Period

⚠️ Please apply only if your experience aligns with the requirements. Candidates with Microsoft Fabric and financial data experience will be prioritised.

Skills

ADF, Azure SQL, Azure cloud, Change Data Feed, Cursor, Delta Lake, Fabric Data Factory, Git, GitHub Copilot, Key Vault, Lakehouse, Microsoft Fabric, OneLake, Python, RBAC, REST API, SQL, Warehouse
