
Sr. Data Engineer - Microsoft Fabric

LinkedIn

Jaipur · On-site · Full-time · Senior · Posted today

About the role

Sr. Data Engineer

RGP is seeking a highly skilled and experienced Microsoft Fabric Senior Data Engineer to join our Data and Analytics team. The ideal candidate will have deep knowledge of modern cloud-based data architectures and hands-on experience within Microsoft Fabric. You will help design, build, and optimize scalable data solutions across the Fabric/Azure ecosystem, including bronze ingestion, silver and gold transformation layers, Lakehouse design, and support for Power BI and AI enablement, as we transition from an on-premises SQL environment to Microsoft Fabric.

Key Responsibilities

  • Architect and develop data ingestion from APIs, SaaS platforms, on‑premises systems, and other sources using Fabric Pipelines and Notebooks/PySpark, building reusable data transformation frameworks for our ETL/ELT processes.
  • Build and optimize Lakehouses, Delta tables, and Power BI semantic models to support enterprise analytics and reporting.
  • Monitor and fine‑tune performance, storage layout, and capacity usage for cost efficiency, scalability, and reliability.
  • Implement data quality frameworks, validation rules, automated testing and enhancements to CI/CD, Git, and DevOps processes.
  • Contribute to data governance, lineage, cataloging, and metadata management initiatives.
  • Collaborate with analysts, data scientists, and business teams to deliver analytics‑ready datasets and models.
  • Mentor junior engineers and help define best practices in data engineering and architecture.
  • Provide ongoing development and support for the legacy on‑premises SQL/SSIS/SSAS/SSRS environment during this transition and decommissioning.
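To illustrate the kind of reusable transformation framework the responsibilities above describe, here is a minimal sketch of promoting raw bronze records to a validated silver layer with named data-quality rules. It is written in plain Python to stay self-contained; in a Fabric notebook this logic would typically run over PySpark DataFrames and Delta tables. The `Rule` class and `promote_to_silver` function are hypothetical names, not part of Fabric or any library.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A named data-quality check: a predicate over one record (hypothetical)."""
    name: str
    check: Callable[[dict], bool]

def promote_to_silver(bronze_rows: list[dict], rules: list[Rule]):
    """Split raw bronze records into silver (all rules pass) and quarantine.

    Quarantined records carry the list of failed rule names so they can be
    inspected or reprocessed later.
    """
    silver, quarantine = [], []
    for row in bronze_rows:
        failed = [r.name for r in rules if not r.check(row)]
        if failed:
            quarantine.append({**row, "_failed_rules": failed})
        else:
            silver.append(row)
    return silver, quarantine

# Example validation rules for an illustrative transactions feed.
rules = [
    Rule("id_present", lambda r: r.get("id") is not None),
    Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
]

bronze = [
    {"id": 1, "amount": 100.0},
    {"id": None, "amount": 5.0},
    {"id": 3, "amount": -2.0},
]
silver, quarantine = promote_to_silver(bronze, rules)
# silver holds only the first record; the other two are quarantined
# with the names of the rules they failed.
```

Keeping rules as data rather than inline conditionals is what makes the framework reusable across sources: each new feed supplies its own rule list, while promotion, quarantining, and logging stay shared.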

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 10+ years of experience in data engineering or a similar role.
  • Expert‑level proficiency in SQL, data transformations, and performance tuning.
  • Extensive experience with Python, PySpark, or Spark‑based distributed processing technologies.
  • Proven expertise with Microsoft Fabric, Azure Data Factory, Synapse and/or Azure Data Services.
  • Strong understanding of ETL/ELT architecture across diverse data sources and consolidations.
  • Experience with CI/CD, DevOps practices, Git, and automated testing frameworks, including their setup in data warehouse environments.
  • Experience with modern data visualization tools.
  • Strong problem‑solving skills in designing and operating scalable data solutions.

Preferred Qualifications

  • Advanced experience with Power BI modeling, DAX, and enterprise semantic models.
  • Previous experience integrating data from Workday, Salesforce, on‑premise SQL, and other enterprise systems.
  • Certifications such as DP‑700, Fabric certifications, or related cloud/data credentials.
  • Familiarity with Microsoft Purview or other Fabric governance capabilities.
  • Experience ensuring compliance with regulatory, privacy, and retention requirements.
  • Experience managing overall data security, including row‑level security.
  • Experience with real‑time analytics (Event Stream, KQL databases, streaming ingestion).


Skills

Azure Data Factory · Azure Data Services · CI/CD · DevOps · ETL · Git · Lakehouse · Microsoft Fabric · Power BI · Python · PySpark · SQL · SSAS · SSIS · SSRS · Synapse
