Associate Sr Data Engineer
Anblicks
About the role
Experience: 5+ Years
Role: Data Engineer
Core Focus: Data engineering, data warehousing, ETL/ELT, ADF, SSIS, Microsoft Fabric, T-SQL, ADLS, and Azure big data technologies
Job Summary
We are seeking a skilled Data Engineer with 5+ years of experience in data engineering, data warehousing, and ETL development. The ideal candidate has strong hands-on experience building, maintaining, and optimizing data pipelines and warehouse solutions; strong hands-on T-SQL skills; working knowledge of PL/SQL; and a solid understanding of Azure Data Factory (ADF), SSIS packages, Microsoft Fabric and Fabric pipelines, Data Lake concepts, ADLS, and other Azure big data technologies. This role requires end-to-end data engineering strength, spanning data ingestion, transformation, warehousing, orchestration, and support of enterprise data solutions.
Key Responsibilities
Design, build, and maintain scalable data pipelines for data ingestion, transformation, and loading.
Develop, enhance, and support ETL/ELT jobs across multiple source and target systems.
Build and support pipelines using Azure Data Factory (ADF), SSIS packages, and Microsoft Fabric pipelines.
Work on data warehouse solutions, including staging, transformation, ODS, and target data models.
Develop and optimize T-SQL queries, stored procedures, functions, and scripts to support data processing and reporting needs.
Analyze source systems and map data into target warehouse and lakehouse structures.
Support data movement across databases, data lakes, cloud platforms, files, and external systems.
Ensure data quality, reconciliation, and consistency across pipelines and data stores.
Troubleshoot issues in ETL jobs, ADF pipelines, SSIS packages, Fabric pipelines, and warehouse processes.
Work with business and technical teams to understand data integration, reporting, and analytics requirements.
Create and maintain technical documentation for data flows, source-to-target mappings, transformation logic, and operational procedures.
Support deployment, testing, and production operations for data engineering solutions.
Follow established development standards, governance processes, and best practices.
Required Skills
5+ years of experience in data engineering, ETL development, or data warehouse development.
Strong understanding of data engineering concepts, including ingestion, transformation, orchestration, and pipeline development.
Strong hands-on experience with data warehousing concepts, including staging, ODS, dimensional modeling, and target data structures.
Strong experience in building and supporting ETL jobs and data integration workflows.
Hands-on experience with Azure Data Factory (ADF), SSIS packages, Microsoft Fabric, and Fabric pipelines.
Strong hands-on experience with T-SQL, including writing, troubleshooting, and optimizing complex queries.
Working knowledge of PL/SQL with the ability to understand and support existing database logic.
Strong understanding of Data Lake architecture, Azure Data Lake Storage (ADLS), and Azure big data technologies.
Experience with data validation, reconciliation, and issue resolution.
Good understanding of batch processing and large-volume data handling.
Strong analytical and problem-solving skills.
Ability to work independently and collaboratively within a team environment.
Nice to Have / Preferred Skills
Knowledge of Salesforce, Git, and CI/CD processes.
Exposure to Databricks and other modern cloud-based data platforms.
Experience with relational databases such as SQL Server, Oracle, or similar platforms.
Knowledge of scheduling and orchestration tools.
Familiarity with API-based, file-based, and database-based integrations.
Experience supporting enterprise data migration or modernization initiatives.
Education
Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
Ideal Candidate Profile
Strong in data engineering, data warehousing, and ETL development.
Experienced with ADF, SSIS, Microsoft Fabric, and Fabric pipelines.
Has solid understanding of Data Lake, ADLS, and Azure big data technologies.
Strong in T-SQL and comfortable working with complex data transformations.
Has working knowledge of PL/SQL, with deeper strength in broader data engineering implementation.
Comfortable working with large data volumes and enterprise data platforms.
Able to support both development and operational data pipeline needs.
Strong team player with good communication and problem-solving skills.