
Staff AI/ML Engineer

Royal Bank of Canada

Calgary · On-site · Full-time · Lead · Today

About the role

What's the opportunity?

We're looking for a seasoned Staff AI/ML Engineer to join the RBC Borealis AI Platform team. In this role you will own the end-to-end lifecycle of machine learning systems—from experimentation and validation all the way to high-throughput production serving. You will be the technical anchor for model operationalization at scale, setting the bar for reliability, observability, and engineering excellence across our AI platform. This is a rare opportunity to shape the foundation on which Canada's largest financial institution runs its most critical AI workloads.

At RBC Borealis, you’ll be joining a team that works directly with leading researchers in machine learning, has access to rich and massive datasets, and offers the computational resources to support ongoing development in areas such as reinforcement learning, unsupervised learning and computer vision. You can find out more about our research areas at rbcborealis.com.

Your responsibilities include:

  • Designing, building, and operating scalable ML model-serving infrastructure using SageMaker, MLflow, or equivalent platforms, ensuring low-latency, high-throughput inference in production (the focus is on serving and operations rather than upstream model training).
  • Architecting and maintaining real-time data and feature pipelines using Kafka and streaming frameworks to support online model serving and event-driven ML workflows.
  • Developing and maintaining robust backend services in Python that expose ML capabilities to downstream consumers via reliable, well-documented APIs.
  • Owning containerized deployment of ML workloads on OpenShift Container Platform (OCP4) / Kubernetes, including resource optimization, autoscaling, and rollout strategies.
  • Building and maintaining CI/CD pipelines (GitHub Actions) for model validation, packaging, and deployment, embedding quality gates and automated testing throughout.
  • Instrumenting ML services with comprehensive observability—metrics, logs, and traces—using Datadog, Dynatrace, Prometheus, or equivalent tooling; driving incident response and blameless post-mortems.
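Purely as an illustration of the serving-and-observability responsibilities above (this code is not from the posting: the recorder is a stdlib stand-in for a real metrics client such as Prometheus or Datadog, and the model is hypothetical), a latency-instrumented predict wrapper might be sketched as:

```python
import time
import statistics
from typing import Callable, List


class LatencyRecorder:
    """Minimal stand-in for a metrics client (e.g. a Prometheus histogram)."""

    def __init__(self) -> None:
        self.latencies_ms: List[float] = []

    def observe(self, latency_ms: float) -> None:
        self.latencies_ms.append(latency_ms)

    def p99_ms(self) -> float:
        # quantiles(n=100) yields 99 cut points; the last approximates p99.
        return statistics.quantiles(self.latencies_ms, n=100)[-1]


def instrumented(predict: Callable, recorder: LatencyRecorder) -> Callable:
    """Wrap a model's predict function so every call records its latency."""

    def wrapper(features):
        start = time.perf_counter()
        result = predict(features)
        recorder.observe((time.perf_counter() - start) * 1000.0)
        return result

    return wrapper


# Hypothetical model: doubles each feature value.
recorder = LatencyRecorder()
serve = instrumented(lambda xs: [2 * x for x in xs], recorder)
print(serve([1, 2, 3]))            # [2, 4, 6]
print(len(recorder.latencies_ms))  # 1
```

In a real deployment the wrapper would sit inside the serving process and export `p99_ms` to an alerting pipeline; here it only shows the shape of the instrumentation.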

You're our ideal candidate if you have:

  • Strong, production-proven experience with ML model serving and lifecycle management using SageMaker, MLflow, or comparable platforms.
  • Expert-level Python skills for backend service development, ML pipeline engineering, and automation scripting.
  • Deep hands-on experience with Apache Kafka and streaming/event-driven architectures for real-time feature pipelines and model inference.
  • In-depth knowledge of OpenShift Container Platform (OCP4) / Kubernetes for deploying and operating containerized ML workloads.
  • Proven experience building and maintaining CI/CD pipelines with GitHub Actions or equivalent tools for ML model delivery.
  • Hands-on expertise with observability platforms such as Datadog, Dynatrace, or Prometheus applied to distributed ML systems.
  • Demonstrated ability to design scalable distributed backend systems that operate reliably under high load in hybrid cloud environments (AWS / Azure / on-prem).
  • Experience with site reliability practices: SLOs/SLIs, alerting, incident management, and capacity planning for ML services.
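As a hedged sketch of the "quality gates" and release-safety ideas that run through the CI/CD and SRE requirements above (the metric name, thresholds, and promotion logic here are all illustrative assumptions, not RBC's actual process), a model-promotion gate a CI job might run could look like:

```python
from dataclasses import dataclass


@dataclass
class GateResult:
    passed: bool
    reason: str


def quality_gate(candidate_auc: float, baseline_auc: float,
                 max_regression: float = 0.01) -> GateResult:
    """Block promotion if the candidate model regresses beyond the tolerance.

    All names and thresholds are hypothetical examples.
    """
    if candidate_auc < baseline_auc - max_regression:
        return GateResult(
            False,
            f"AUC regressed: {candidate_auc:.3f} vs baseline {baseline_auc:.3f}",
        )
    return GateResult(True, "within tolerance")


# A CI step (e.g. a GitHub Actions job) could call this and fail the build
# on a non-passing result.
result = quality_gate(candidate_auc=0.912, baseline_auc=0.915)
print(result.passed)
```

A gate like this typically runs after automated validation and before the deployment stage, so a regressing model never reaches the rollout step.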

Nice to have:

  • Proficiency with MongoDB in production environments for storing model metadata, feature stores, or application state.
  • Experience with Elasticsearch for log aggregation, search, and ML-adjacent analytics use cases.
  • Familiarity with JavaScript or Go for building lightweight platform tooling or internal developer portals.
  • Background in audio processing pipelines—speech recognition, audio feature extraction, or real-time audio streaming—for multimodal AI applications.
  • Exposure to agentic AI systems, LLM orchestration frameworks, or self-hosted large language model infrastructure.

What's in it for you?

  • Become part of a team that thinks progressively and works collaboratively. We care about seeing each other reach full potential;
  • A comprehensive Total Rewards Program including bonuses and flexible benefits, competitive compensation, commissions, and stock options where applicable;
  • Leaders who support your development through coaching and managing opportunities;
  • Ability to make a difference and lasting impact from a local-to-global scale.

About RBC Borealis

RBC Borealis is the driving force behind Royal Bank of Canada’s AI and data innovation. As part of Canada’s largest financial institution, we bring together a team of architects, engineers, scientists, and product experts on a mission to revolutionize finance through world-class research, solutions, and a resilient data platform. With locations across Toronto, Waterloo, Montreal, Calgary, and Vancouver, we’re at the forefront of AI research and platform development. With a focus on cutting-edge research in areas like time series forecasting, causal machine learning, and responsible AI, we seamlessly integrate AI research and data engineering to solve critical challenges in the financial industry. We are building intelligent, scalable, data-driven solutions that will help communities thrive and drive innovation for our customers across the bank.

Inclusion and Equal Opportunity Employment

RBC is an equal opportunity employer committed to diversity and inclusion. We are pleased to consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status, Aboriginal/Native American status, or any other legally protected factors. Disability-related accommodations during the application process are available upon request.

Skills

Amazon SageMaker · Apache Kafka · Autoscaling · CI/CD · Datadog · Deep Learning · Dynatrace · Kubernetes · Machine Learning · MLflow · MongoDB · OpenShift · Prometheus · Python
