AI/ML Engineer

Sunrise

On-site · Full-time · 1w ago

About the role

About Sunrise

At Sunrise, we think bigger, go further and create new ideas. For us working culture means achieving great things together. It’s where respect and innovative ideas combine with real teamwork – every voice counts, every perspective makes us stronger. Our passion spurs us on to try new things and grow continuously. Sound like you? Then join our success story.

About ADAO (AI, Data and Agentic Enablement Office)

At ADAO (AI, Data and Agentic Enablement Office), we bring people together to unlock the future of Sunrise through data, analytics, AI, and agentic technologies. We think big, move with purpose, and turn ideas into real impact for our customers and teams. ADAO is where innovation meets trust: we build strong data foundations, scale safe and reliable AI capabilities, and empower business units through a hub‑and‑spoke operating model to deliver meaningful results across the company. Within this model, you will work as part of the Agentic Enablement and Prompt Engineering Conversation Design team, where the central hub provides standards, platforms, architecture, and best practices, while cross‑functional, outcome‑driven Pods are embedded in and led by the business. You will act as an engineering expert, partnering closely with the business to deliver on their priorities and outcomes.

In this role you will design, build, deploy and operate production-grade ML and GenAI solutions (including RAG components where applicable) that deliver measurable business value for Sunrise, ensuring end-to-end lifecycle excellence from experimentation through automated deployment, monitoring, risk controls and continuous improvement, in alignment with ADAO standards and governance.

You will be part of the central AI/ML Engineering MLOps chapter within the ADAO Hub, supporting embedded Domain Pods through reusable platform components, engineering standards and hands‑on delivery.

YOUR CHALLENGE:

  • Develop and industrialize ML/GenAI solutions: Translate business roadmaps into solution designs while implementing robust training/inference pipelines and production services that meet all non‑functional requirements
    • Co‑design solution architectures with Data Scientists and Domain Pods (batch/real‑time, APIs, RAG patterns where relevant)
    • Industrialize prototypes: automated tests, secure packaging, deployment/rollback, and performance/cost tuning
  • Own and evolve MLOps/LLMOps pipelines: Build standardized CI/CD workflows, manage model promotion through dev→test→prod, and maintain monitoring and drift/retraining mechanisms
    • Maintain standard CI/CD/CT templates incl. model/prompt packaging, validation gates, and dev→test→prod promotion
    • Operate model registry/versioning and automated retraining triggers aligned to ADAO governance and CIO/IT change windows
  • Operate experimentation and feature management: Provide reproducible experimentation, manage feature store patterns and validation, and promote reusable datasets/features across Pods
    • Provide experiment tracking, dataset/feature versioning and feature‑store patterns (offline/online parity)
    • Run evaluation harnesses for ML and GenAI (regression, A/B tests, safety and multilingual checks) and share reusable assets
  • Model documentation, governance and risk: Produce compliant model documentation, implement required governance/security/privacy controls, and support GenAI safety and auditability measures
    • Produce model cards and compliance artefacts (data sources, limitations, privacy/bias controls) and evidence for audits
    • Implement Responsible AI / GenAI guardrails: grounding, HITL approvals, audit logs, and risk reviews with governance teams
  • Observability, reliability and incident response: Implement full observability for models and services, maintain runbooks and alerting, participate in incident resolution, and drive reliability improvements
    • Define SLOs/SLIs and implement monitoring/alerting for quality, drift, latency, availability and cost
    • Maintain runbooks, participate in on‑call/incident postmortems, and drive corrective actions
  • Enablement and reusable components: Provide reusable ML/GenAI components, coach teams on MLOps/LLMOps best practices and ensure consistent engineering standards across Hub and Pods
    • Build reusable libraries and reference implementations (RAG connectors, evaluation utilities, deployment scaffolding)
    • Coach Pod engineers through reviews, pairing and clinics; contribute to ADAO AI Academy materials
  • Key deliverables (illustrative):
    • Standard ML/LLMOps pipeline template + reference architectures (batch/real‑time/RAG)
    • Model registry and artefact versioning (models, prompts, datasets, features)
    • Monitoring dashboards + drift detection + SLO reporting
    • GenAI evaluation test suite (regression, safety, multilingual) and acceptance thresholds
    • Model documentation pack and operational runbooks (audit‑ready)
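To make the RAG patterns referenced above concrete, here is a minimal retrieval sketch. It is purely illustrative (the document IDs and 3-dimensional "embeddings" are made up): a real system would use an embedding model and a vector store behind the same interface.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Return the top_k document IDs ranked by similarity to the query.

    corpus: list of (doc_id, embedding) pairs. In production the
    embeddings come from an embedding model and live in a vector store.
    """
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
corpus = [
    ("roaming-faq", [0.9, 0.1, 0.0]),
    ("billing-guide", [0.1, 0.9, 0.1]),
    ("device-setup", [0.0, 0.2, 0.9]),
]
print(retrieve([0.8, 0.2, 0.1], corpus, top_k=1))  # → ['roaming-faq']
```

The retrieved IDs would then feed document text into the generation prompt, which is the grounding step the guardrail bullets refer to.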
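The dev→test→prod promotion with validation gates described under "Own and evolve MLOps/LLMOps pipelines" follows a simple pattern, sketched below. This is a toy in-memory registry with hypothetical names ("churn-model"); real setups would put a tool such as MLflow's model registry behind the same gate logic.

```python
class ModelRegistry:
    """Minimal in-memory model registry with staged promotion gates."""

    STAGES = ["dev", "test", "prod"]

    def __init__(self):
        self._models = {}  # (name, version) -> {"stage": ..., "metrics": ...}

    def register(self, name, version, metrics):
        """New versions always start in dev."""
        self._models[(name, version)] = {"stage": "dev", "metrics": metrics}

    def promote(self, name, version, gate):
        """Advance one stage, but only if the validation gate passes."""
        entry = self._models[(name, version)]
        idx = self.STAGES.index(entry["stage"])
        if idx == len(self.STAGES) - 1:
            raise ValueError("already in prod")
        if not gate(entry["metrics"]):
            raise ValueError("validation gate failed")
        entry["stage"] = self.STAGES[idx + 1]
        return entry["stage"]

registry = ModelRegistry()
registry.register("churn-model", "1.2.0", {"auc": 0.87})
gate = lambda m: m["auc"] >= 0.80  # acceptance threshold for promotion
print(registry.promote("churn-model", "1.2.0", gate))  # → test
```

Calling `promote` again with a passing gate moves the version on to prod; a failing gate blocks promotion, which is exactly what the CI/CD validation-gate bullet asks for.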
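A GenAI evaluation harness with acceptance thresholds, as listed in the deliverables, can be as simple as a regression suite over prompt/expectation pairs. The sketch below uses a stand-in `fake_generate` function and invented prompts; a real harness would call the deployed LLM endpoint and use richer checks than substring matching.

```python
def run_eval(generate, cases, threshold=0.8):
    """Run an answer-quality regression suite against a generate() function.

    Each case is (prompt, required_substring); a case passes when the
    model output contains the expected grounding. Returns (pass_rate, ok)
    where ok means the pass rate meets the acceptance threshold.
    """
    passed = sum(1 for prompt, expected in cases
                 if expected.lower() in generate(prompt).lower())
    rate = passed / len(cases)
    return rate, rate >= threshold

# Stand-in model: canned answers instead of a real LLM call.
def fake_generate(prompt):
    answers = {
        "What is roaming?": "Roaming lets you use your phone abroad.",
        "Wie kündige ich?": "You can cancel your contract online.",
    }
    return answers.get(prompt, "")

cases = [
    ("What is roaming?", "abroad"),
    ("Wie kündige ich?", "cancel"),  # multilingual prompt, English check
]
rate, ok = run_eval(fake_generate, cases, threshold=0.8)
print(rate, ok)  # → 1.0 True
```

Wiring such a suite into the CI/CD validation gates makes the "acceptance thresholds" deliverable enforceable rather than advisory.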
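The "model cards and compliance artefacts" bullet amounts to producing a structured, machine-readable record per model version. Below is a minimal, hypothetical shape (field names are illustrative, not a mandated ADAO schema); serializing it to JSON keeps it diffable and audit-ready.

```python
import json

def build_model_card(name, version, data_sources, limitations, controls):
    """Assemble a minimal model card as a plain dict."""
    return {
        "model": name,
        "version": version,
        "data_sources": data_sources,
        "limitations": limitations,
        "privacy_bias_controls": controls,
    }

card = build_model_card(
    name="churn-model",
    version="1.2.0",
    data_sources=["crm_contracts", "usage_aggregates"],
    limitations=["Not validated for B2B accounts"],
    controls=["PII pseudonymised at ingestion", "bias check per segment"],
)
print(json.dumps(card, indent=2))
```

Storing the card alongside the versioned model artefact means the audit evidence is produced by the pipeline itself, not assembled after the fact.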
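For the drift-detection deliverable, one widely used metric is the Population Stability Index (PSI) comparing a live feature sample against the training baseline. The sketch below is a stdlib-only illustration with toy data; common rules of thumb treat PSI < 0.1 as stable, 0.1–0.25 as moderate drift, and > 0.25 as significant drift worth an alert or retraining trigger.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline and a live sample."""
    lo, hi = min(expected), max(expected)
    width = hi - lo

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width * bins), 0), bins - 1)
            counts[i] += 1
        # small epsilon avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # training distribution
live = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.8]  # recent inference inputs
print(round(psi(baseline, live), 3))
```

In a monitoring setup the PSI per feature would be emitted as a metric, with the 0.25 cutoff wired into the alerting and the retraining triggers mentioned above.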

YOUR SKILLS:

  • Bachelor’s/Master’s in Computer Science, Data Science,

Skills

AI · GenAI · LLMOps · MLOps · RAG
