Lead Data Engineer (Global Security)

0000050007 Royal Bank of Canada

Canada · On-site · Full-time · Lead · 2 weeks ago

About the Opportunity

At RBC, our data engineering team enhances visibility into assets across the Public Cloud and Application Security landscape. Our mission is to provide clear insights into digital infrastructure, enabling effective identification and management of security risks. We harness industry‑leading tools like Databricks, Python, PySpark, and Tableau, transforming data into strategic assets. Our approach goes beyond traditional security by analyzing complex datasets to generate actionable business insights, thereby strengthening our cyber resilience. Collaboration is key to our success, fostering an innovative environment where team members leverage their narrative and technical skills to drive continuous advancements in cloud security.

Responsibilities

  • Design, develop, and maintain end‑to‑end data pipelines in Azure Databricks using Spark (SQL, PySpark) to transform large datasets efficiently.
  • Develop and optimize ETL/ELT workflows using Databricks Workflows or Apache Airflow, ensuring data integrity, quality, and reliability.
  • Design and manage Delta Lake solutions for data versioning, incremental data loads, and efficient data storage (a minimal sketch follows this list).
  • Collaborate with cross‑functional teams to understand data requirements, create robust data models, and deliver actionable insights.
  • Implement Site Reliability Engineering (SRE) practices for data pipelines by building automated monitoring, alerting, and incident management solutions to ensure data reliability, availability, and performance.
  • Apply best practices in data governance, ensuring compliance using Unity Catalog for access management and data lineage tracking.
  • Monitor, troubleshoot, and optimize Spark jobs for performance, addressing data pipeline bottlenecks and ensuring cost efficiency.
  • Implement infrastructure‑as‑code solutions using Terraform for automated resource provisioning and management.
  • Develop and maintain comprehensive documentation for data pipelines, transformations and data models.
  • Provide mentorship and technical guidance to junior engineers, fostering a culture of learning and best practices in data engineering.
  • Lead and mentor a team of data engineers, providing technical guidance and fostering professional development.
  • Oversee the design and implementation of complex data solutions, ensuring alignment with business objectives.
  • Drive the adoption of best practices in data engineering, including code reviews, testing, and documentation.
  • Collaborate with stakeholders to define and prioritize data engineering projects, ensuring timely delivery and high‑quality outcomes.
  • Stay updated on emerging technologies and trends in data engineering, recommending and implementing innovative solutions.
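For context on the stack named above: the pipeline and Delta Lake bullets typically translate into PySpark code along the following lines. This is a minimal sketch only; the table names (security.raw_asset_events, security.asset_inventory) and the asset_id and ingest_date columns are illustrative assumptions, not details taken from the posting.

    # Minimal sketch of an incremental Delta Lake load on Databricks.
    # All table and column names below are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F
    from delta.tables import DeltaTable

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    # Hypothetical source: asset records that landed since the last run.
    updates = (
        spark.read.table("security.raw_asset_events")
        .where(F.col("ingest_date") == F.current_date())
    )

    # Upsert into the target Delta table; the transaction log retains prior
    # versions for auditing and time travel (data versioning).
    target = DeltaTable.forName(spark, "security.asset_inventory")
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.asset_id = s.asset_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

In practice a job like this would be scheduled through Databricks Workflows or Airflow and monitored as described above; the merge keys and load window depend on the actual source systems.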

Requirements

  • Must-have: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 8+ years of proven experience in data engineering, delivering business‑critical software solutions for large enterprises with a consistent track record of success.
  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table triggers, Delta Live Tables, Databricks Runtime, cluster management, etc.).
  • Proficiency in Azure Cloud Services.
  • Solid understanding of Spark and PySpark for big data processing.
  • English fluency, verbal and written.
  • Knowledge of SCM, Infrastructure-as-code, and CI/CD pipelines (a brief testing sketch follows this list).
  • Experience leading and mentoring a team of data engineers.
  • Strong project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
  • Excellent communication and collaboration skills, with the ability to work effectively with cross‑functional teams.
  • Experience with Agile methodologies and DevOps practices.
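As one concrete illustration of the testing and CI/CD expectations above, the sketch below shows a pytest-style unit test for a PySpark transformation that could run in a CI job against a local Spark session. The dedupe_assets function and its columns are hypothetical and not taken from the posting.

    # Minimal sketch of a data-pipeline unit test suitable for a CI runner.
    # The transformation under test (dedupe_assets) is a hypothetical example.
    import pytest
    from pyspark.sql import SparkSession, functions as F

    @pytest.fixture(scope="session")
    def spark():
        # Local Spark session so the test needs no cluster.
        return (
            SparkSession.builder.master("local[1]")
            .appName("pipeline-tests")
            .getOrCreate()
        )

    def dedupe_assets(df):
        # Keep a single row per asset_id.
        return df.dropDuplicates(["asset_id"])

    def test_dedupe_assets_removes_duplicates(spark):
        df = spark.createDataFrame(
            [("a-1", "vm"), ("a-1", "vm"), ("a-2", "storage")],
            ["asset_id", "asset_type"],
        )
        result = dedupe_assets(df)
        assert result.count() == 2
        assert result.filter(F.col("asset_id") == "a-1").count() == 1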

Nice to Have

  • Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Engineer).
  • Exposure to Kubernetes, Docker, and Terraform.
  • Strong understanding of business intelligence and reporting tools.
  • Familiarity with Cyber Security Concepts.

What’s in it for You?

  • We thrive on the challenge to be our best, progressive thinking to keep growing, and working together to deliver trusted advice to help our clients thrive and communities prosper.
  • We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.
  • A comprehensive Total Rewards Program including bonuses and flexible benefits, competitive compensation, commissions, and stock where applicable.
  • Leaders who support your development through coaching and managing opportunities.
  • Work in a dynamic, collaborative, progressive, and high‑performing team.
  • A world‑class training program in financial services.
  • Flexible work/life balance options.
  • Opportunities to do challenging work.
  • Opportunities to take on progressively greater accountabilities.
  • Opportunities to build close relationships with clients.

Job Skills

  • Big Data Management
  • Cloud Computing
  • Database Development
  • Data Mining
  • Data Warehousing (DW)
  • ETL Processing
  • Group Problem Solving
  • Quality Management
  • Requirements Analysis

Additional Job Details

  • Address: 16 York St, Toronto
  • City: Toronto
  • Country: Canada
  • Work hours/week: 37.5
  • Employment Type: Full time
  • Platform: TECHNOLOGY AND OPERATIONS
  • Job Type: Regular
  • Pay Type: Salaried
  • Posted Date: 2025-11-25
  • Application Deadline: 2026-04-30
  • Note: Applications will be accepted until 11:59 PM on the day prior to the application deadline date above.

Our Employment Opportunities

At RBC, we are guided by living shared values of Client First, Integrity, Collaboration, Respect and Excellence and winning together as One RBC. We believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world. Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging and opportunity for all.

Join Our Talent Community

Stay in‑the‑know about great career opportunities at RBC. Sign up and get customized info on our latest jobs, career tips and Recruitment events that matter to you. Expand your limits and create a new future together at RBC. Find out how we use our passion and drive to enhance the well‑being of our clients and communities at jobs.rbc.com.

Royal Bank of Canada is a global financial institution with a purpose‑driven, principles‑led approach to delivering leading performance. Our success comes from the 84,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to more than 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at rbc.com/community‑social‑impact.

#LI-POST #TECHPJ

Benefits

Bonuses, flexible benefits, competitive compensation, commissions, stock, coaching and managing opportunities, training program

Skills

Apache Airflow, Azure Databricks, CI/CD, Databricks, Delta Lake, DevOps, Docker, ETL, Infrastructure-as-code, Kubernetes, Python, PySpark, SCM, Spark, SQL, Terraform, Tableau, Unity Catalog
