Sr. Data Engineer - No C2C
The Doyle Group
About
The Doyle Group is a proven partner for placement and consulting services, headquartered in Denver, CO. Our core mission is to forge genuine partnerships with clients who seek strategic talent solutions and to assist highly skilled candidates looking for their next career opportunity. With over 30 years of industry experience, we take a consultative approach that allows us to provide a higher level of guidance and insight, empowering our clients to secure top IT talent that fits seamlessly into their team and culture. We look forward to collaborating with you to help you achieve your career goals.
Position Summary
Our client is expanding its enterprise data capabilities and investing in a modern analytics foundation to support regulatory, financial, and operational reporting across the business. The Senior Data Engineer will serve as a senior individual contributor responsible for building reliable, secure, and scalable data pipelines and curated warehouse datasets on Azure + Databricks.
This role is ideal for a hands‑on engineer who enjoys modernizing legacy environments, setting engineering standards, and raising the bar on platform reliability through monitoring, alerting, and automation. The Sr. Data Engineer will partner closely with BI/analytics teammates; most business‑facing requirements intake will flow through BI rather than directly through this role.
Note: This role is not eligible for visa sponsorship.
Responsibilities
- Design, build, and enhance scalable data pipelines and curated warehouse datasets on Azure + Databricks to support enterprise analytics and reporting.
- Support modernization efforts by translating and migrating existing on‑prem SQL Server warehouse logic into a cloud‑based Databricks lakehouse environment.
- Implement and maintain Medallion (Bronze/Silver/Gold) patterns and contribute to a high‑quality dimensional / star‑schema consumption layer.
- Develop reliable orchestration workflows and job scheduling patterns; manage dependencies and improve pipeline performance and resilience.
- Establish and improve observability (logging, monitoring, alerting) and participate in incident response, root‑cause analysis, and remediation of recurring data issues.
- Apply secure data engineering practices, including role‑appropriate access patterns and protection of sensitive data.
- Partner with BI/analytics teammates to deliver trusted, well‑documented datasets (with most requirements intake flowing through BI/analytics stakeholders).
- Promote engineering standards across the team (version control, code review, testing, documentation, deployment discipline).
- Provide mentorship and technical guidance to other engineers and analysts through design reviews, code reviews, and knowledge sharing.
- Document pipeline designs, data models, and reusable patterns to ensure consistency and continuity.
Minimum Experience / Requirements
- 5+ years of experience in data engineering, data warehousing, or a closely related role.
- Advanced SQL skills, including building and optimizing transformations for analytics use cases.
- Demonstrated experience with dimensional modeling (Kimball/star schema) and building curated reporting datasets.
- Hands‑on experience with Databricks (or a comparable distributed data platform) and modern lakehouse/warehouse patterns.
- Experience building production‑grade pipelines with strong operational ownership (monitoring/alerting, troubleshooting, reliability).
- Working knowledge of secure data handling practices (access controls, credential management, data protection).
- Strong communication and collaboration skills with both technical and business‑facing partners.
- Bachelor’s degree in a related technical field or equivalent practical experience.
- Ability to work a hybrid schedule (3 days/week onsite in the Baltimore area).
Additional Pluses:
- Azure services experience such as Azure Data Factory, ADLS, Delta Lake, and related platform tooling.
- Python experience for transformations, automation, or data engineering tooling.
- Exposure to BI/analytics platforms such as Power BI (semantic models, curated datasets, downstream enablement).
- Experience migrating from legacy/on‑prem environments to cloud platforms.
- Experience in regulated industries or environments with elevated governance/security expectations.
Salary / Wage Range
- $110,880 – $177,409 / Year
- Compensation for the role will depend on a number of factors, including a candidate's qualifications, skills, competencies, and experience, and may fall outside the range shown.
- The Doyle Group offers a competitive rewards package, including a 401(k), healthcare coverage, and other benefits.