
Databricks Machine Learning (ML) Administrator

Applied Materials, Inc.

Canada · Flexible · Full-time · Senior · 4d ago

About the role

About Applied Materials

Applied Materials is a global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips – the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world – like AI and IoT. If you want to push the boundaries of materials science and engineering to create next generation technology, join us to deliver material innovation that changes the world.

What We Offer

Location: Home / Mobile, CAN-ONTARIO-001

You’ll benefit from a supportive work culture that encourages you to learn, develop, and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible—while learning every day in a supportive leading global company. Visit our Careers website to learn more.

At Applied Materials, we care about the health and wellbeing of our employees. We’re committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go.

Role Overview

We are seeking an experienced Databricks Machine Learning (ML) Administrator to own the end‑to‑end administration, governance, and secure operations of our ML environments on Databricks. In this role, you will configure and manage ML compute, enforce access and governance for MLflow assets (experiments and model registry), and ensure reliable model training, deployment, and serving at scale. You will partner closely with Data Engineering, ML Engineering, Security, and FinOps to deliver a robust, compliant, and cost‑efficient ML platform.

Key Responsibilities

Platform Operations & Compute

  • Deploy, configure, and maintain Databricks ML clusters (CPU/GPU), SQL Warehouses, and cluster policies optimized for ML workloads; apply autoscaling, pools, and runtime selection (including Databricks Runtime for ML).
  • Administer Jobs and Pipelines that orchestrate training, evaluation, and batch/real‑time scoring; manage run‑as identities and default privileges to meet least‑privilege requirements.
  • Establish and enforce compute access controls (attach/restart/manage) and workspace object permissions; standardize policies to prevent configuration drift.
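As an illustrative sketch of the cluster-policy work described above: a Databricks cluster policy is a JSON document mapping compute attributes to rules. The policy below is hypothetical (the tag values, worker limits, and runtime pattern are assumptions, not from the posting) but uses the standard rule types (`fixed`, `range`, `regex`) to pin the ML runtime family, bound autoscaling, and enforce auto-termination:

```python
import json

# Hypothetical cluster policy for ML workloads; names and limits are
# illustrative assumptions. Databricks cluster policies are JSON maps of
# attribute paths to rule objects ("fixed", "range", "regex", etc.).
ml_cluster_policy = {
    "spark_version": {
        "type": "regex",
        "pattern": ".*-ml-.*",  # restrict to Databricks Runtime for ML builds
    },
    "autoscale.min_workers": {"type": "range", "maxValue": 2},
    "autoscale.max_workers": {"type": "range", "maxValue": 16},
    "autotermination_minutes": {
        "type": "range", "minValue": 10, "maxValue": 120, "defaultValue": 60,
    },
    # A fixed tag supports cost attribution and prevents configuration drift.
    "custom_tags.team": {"type": "fixed", "value": "ml-platform"},
}

policy_json = json.dumps(ml_cluster_policy, indent=2)
print(policy_json)
```

Standardizing on a small set of such policies is one common way to keep attach/restart/manage permissions and cluster configurations from drifting across teams.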

ML Lifecycle Governance (MLflow & Serving)

  • Govern MLflow Experiments and Registered Models with fine‑grained permissions (read/edit/manage), standardizing experiment tracking, model versioning, stage transitions, and approvals.
  • Operate and secure model serving endpoints, including permissions for view, query, and manage actions; implement change control for deployments.
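The read/edit/manage permission tiers mentioned above can be sketched as an access-control list in the shape used by the Databricks Permissions API for registered models. The group names here are hypothetical; only the permission-level names follow the platform's convention:

```python
# Hypothetical ACL for a registered model; group names are made up.
# Permission levels follow the Databricks convention (CAN_READ / CAN_EDIT /
# CAN_MANAGE).
registry_acl = {
    "access_control_list": [
        {"group_name": "ml-engineers", "permission_level": "CAN_MANAGE"},
        {"group_name": "data-scientists", "permission_level": "CAN_EDIT"},
        {"group_name": "analysts", "permission_level": "CAN_READ"},
    ]
}

def allowed_to_manage(acl: dict) -> list[str]:
    """Return the groups that hold manage rights (e.g. to approve
    stage transitions or change permissions)."""
    return [
        entry["group_name"]
        for entry in acl["access_control_list"]
        if entry["permission_level"] == "CAN_MANAGE"
    ]

print(allowed_to_manage(registry_acl))  # ['ml-engineers']
```

Keeping manage rights on a small administrative group, with edit rights for model producers and read rights for consumers, is one way to implement the least-privilege promotion flow the role calls for.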

Data Access & Unity Catalog Alignment

  • Coordinate with data governance to implement metastore, catalog, schema, and table‑level permissions that support feature engineering, training, and evaluation while safeguarding sensitive data.
  • Apply enterprise identity and access management patterns across account and workspace scopes (users, groups, service principals) using SCIM/SSO standards.
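The catalog-, schema-, and table-level permissions described above map to Unity Catalog `GRANT` statements. A minimal sketch, where the catalog, schema, table, and group names are all hypothetical:

```python
# Hypothetical Unity Catalog grants supporting feature engineering and
# training; the object and principal names are illustrative only.
grants = [
    ("USE CATALOG", "ml_prod", "CATALOG", "ml-engineers"),
    ("USE SCHEMA", "ml_prod.features", "SCHEMA", "ml-engineers"),
    ("SELECT", "ml_prod.features.customer_features", "TABLE", "ml-engineers"),
]

def grant_sql(privilege: str, obj: str, obj_type: str, principal: str) -> str:
    """Render a Unity Catalog GRANT statement for the given privilege."""
    return f"GRANT {privilege} ON {obj_type} {obj} TO `{principal}`;"

statements = [grant_sql(*g) for g in grants]
for s in statements:
    print(s)
```

Note the layering: a group needs `USE CATALOG` and `USE SCHEMA` on the containing objects before a table-level `SELECT` takes effect, which is what makes schema-level scoping a practical guardrail for sensitive data.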

Security, Compliance & Auditability

  • Enforce workspace object ACLs, compute isolation modes, secret handling, and log‑access controls for ML clusters; implement Spark ACL settings per policy.
  • Operationalize system tables/audit logs and usage analytics to meet regulatory and internal control requirements; partner with Security/GRC for periodic reviews.
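The system-table auditing mentioned above is typically done with SQL against Unity Catalog system tables such as `system.access.audit`. A hedged sketch of one such query (the service filter and time window are illustrative assumptions):

```python
# Sketch of an audit query against the Unity Catalog system table
# system.access.audit; the filter values are illustrative, not prescribed
# by the posting.
audit_query = """
SELECT event_time, user_identity.email, service_name, action_name
FROM system.access.audit
WHERE service_name = 'clusters'
  AND event_time >= current_timestamp() - INTERVAL 7 DAYS
ORDER BY event_time DESC
""".strip()

print(audit_query)
```

Queries like this can back the usage dashboards and periodic Security/GRC reviews the role describes, without granting reviewers direct access to the workspaces being audited.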

Reliability, Monitoring & Incident Response

  • Monitor cluster health, job success/failure, serving endpoint SLOs, and capacity; establish alerting and incident runbooks for ML infrastructure.
  • Lead post‑incident reviews and continuous improvement for platform reliability and developer productivity.

Cost Management & FinOps

  • Implement and iterate compute policies, budget policies, and usage dashboards to optimize GPU/CPU consumption for ML training and serving.

Enablement & Best Practices

  • Define and evangelize ML platform standards: environment baselines, cluster policies, experiment hygiene, model promotion flows, and serving change‑management.
  • Partner with ML teams to align platform features (AutoML, Feature/Vector stores, model serving) to use cases and performance targets.

Required Qualifications

  • 5+ years administering Databricks or similar ML/data platforms (e.g., Spark‑based platforms) with hands‑on experience in workspace administration, compute policies, and MLflow governance.
  • Proven expertise managing

Skills

Databricks · GPU · MLflow · Python · SCIM · Spark · SSO · SQL
