Senior Systems Engineer (AI Cloud Infrastructure)

Multiverse Computing LLC

München · On-site · Full-time · Senior · 1w ago

About the role

About Multiverse Computing

Multiverse is a well‑funded, fast‑growing deep‑tech company founded in 2019. We are the largest quantum software company in the EU and have been recognized by CB Insights (2023 and 2025) as one of the 100 most promising AI companies in the world.

With 180+ employees and growing, our team is fully multicultural and international. We deliver hyper‑efficient software for companies seeking a competitive edge through quantum computing and artificial intelligence.

Our flagship products, CompactifAI and Singularity, address critical needs across various industries:

  • CompactifAI is a groundbreaking, Tensor Network-based compression tool for foundation AI models. It enables the compression of large AI systems—such as language models—to make them significantly more efficient and portable.
  • Singularity is a quantum‑ and quantum‑inspired optimization platform used by blue‑chip companies to solve complex problems in finance, energy, manufacturing, and beyond. It integrates seamlessly with existing systems and delivers immediate performance gains on classical and quantum hardware.

You’ll be working alongside world‑leading experts to develop solutions that tackle real‑world challenges. We’re looking for passionate individuals eager to grow in an ethics‑driven environment that values sustainability and diversity.

We’re committed to building a truly inclusive culture—come and join us.

Role description

We are looking for a Senior Engineer to lead a critical initiative within our Platform Engineering team: building the software layer for an AI Gigafactory. In this role, you will move beyond consuming public cloud resources to architecting and building a private "Neo‑cloud" from the ground up. You will design the control planes that manage high‑performance compute clusters, orchestrate thousands of GPUs, and optimize the hardware‑software interface for massive AI workloads.

This role sits at the intersection of High‑Performance Computing (HPC), Kubernetes Internals, and Bare Metal Engineering.

What you will be doing

  • Building the Control Plane: Designing and developing the software layer (APIs, Controllers, Agents) that automates the lifecycle of bare‑metal AI infrastructure.
  • Orchestrating High‑Scale Compute: Architecting scheduling solutions for large‑scale distributed training jobs across massive clusters of GPUs (NVIDIA H200/B200/B300), ensuring efficient bin‑packing and gang scheduling.
  • Optimizing the Fabric: Tuning the software‑defined networking layer to support the low‑latency interconnects (InfiniBand/RDMA/RoCEv2) essential for multi‑node training.
  • Developing Kubernetes Extensions: Writing custom Kubernetes Operators and CRDs to abstract complex hardware realities (topology awareness, GPU partitioning) into usable interfaces for our Data Scientists.
  • Hardware‑Level Debugging: Investigating and resolving deep systems issues, ranging from PCIe bus errors and NCCL communication timeouts to kernel panics on bare‑metal nodes.
  • Defining Standards: Creating the "Golden Image" for AI workloads, managing drivers, firmware, and OS optimizations to squeeze maximum performance out of the hardware.
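To make the scheduling bullet above concrete: "bin‑packing" means fitting multi‑GPU jobs onto nodes with minimal fragmentation, and "gang scheduling" means a job's workers are placed all together or not at all. The sketch below is purely illustrative (all names are hypothetical, and it is not Multiverse's scheduler): a first‑fit‑decreasing placement of jobs onto fixed‑size GPU nodes with an all‑or‑nothing admission check.

```python
# Illustrative sketch: first-fit-decreasing bin-packing of multi-GPU jobs
# onto fixed-size nodes, with an all-or-nothing ("gang") admission check.
# Hypothetical example code, not a real scheduler.

def place_jobs(jobs, node_count, gpus_per_node=8):
    """jobs: list of (job_id, list of per-worker GPU demands).
    Returns {job_id: [node index per worker]} for jobs admitted whole."""
    free = [gpus_per_node] * node_count
    placements = {}
    # Place the largest jobs first (first-fit decreasing) to reduce fragmentation.
    for job_id, workers in sorted(jobs, key=lambda j: -sum(j[1])):
        trial = list(free)
        assignment = []
        admitted = True
        for demand in workers:
            for node, capacity in enumerate(trial):
                if capacity >= demand:
                    trial[node] -= demand
                    assignment.append(node)
                    break
            else:
                admitted = False  # one worker unplaceable -> reject the whole gang
                break
        if admitted:
            free = trial  # commit the placement only if every worker fit
            placements[job_id] = assignment
    return placements
```

For example, on two 8‑GPU nodes, a two‑worker job demanding 8 GPUs per worker fills the cluster, and a later 4‑GPU job is rejected whole rather than partially started.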

Requirements

  • Software Engineering Expertise: 10+ years of software engineering experience. Strong proficiency in Python is a must. Experience with Go (Golang) is a plus. You must be comfortable building system agents, APIs, and CLI tools.
  • OpenStack Expertise: Deep architectural and operational knowledge of OpenStack. You must be proficient with core components (Nova, Neutron, Keystone, Glance) and understand how to manage large‑scale deployments.
  • Deep Kubernetes Knowledge: You understand K8s internals beyond simple deployment. Experience with Custom Resource Definitions (CRDs), Operators, and the Kubernetes API server architecture.
  • GPU Ecosystem Experience: Hands‑on experience managing NVIDIA GPU clusters. Familiarity with NVIDIA drivers, CUDA toolkit, and the container runtime (NVIDIA Container Toolkit).
  • Linux Internals: Deep understanding of the Linux kernel, cgroups, namespaces, and system performance tuning.
  • Infrastructure as Code: Mastery of declarative infrastructure tools (Terraform, Ansible) but with a focus on…
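As a rough illustration of the "system agents, APIs, and CLI tools" expectation above, the following minimal sketch shows the shape of a node‑agent CLI built with Python's standard argparse module. Every command, flag, and name here is hypothetical, invented for this example only.

```python
# Minimal sketch of a hypothetical node-agent CLI (not an existing tool).
import argparse
import json

def build_parser():
    parser = argparse.ArgumentParser(prog="node-agent")
    sub = parser.add_subparsers(dest="command", required=True)
    drain = sub.add_parser("drain", help="cordon a node and report the request")
    drain.add_argument("node", help="node hostname")
    drain.add_argument("--timeout", type=int, default=300,
                       help="seconds to wait for workloads to exit")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    # A real agent would call the control-plane API here; this sketch
    # just echoes the parsed request as JSON for inspection.
    return json.dumps({"command": args.command,
                       "node": args.node,
                       "timeout": args.timeout})
```

Calling `run(["drain", "gpu-node-17", "--timeout", "60"])` returns the parsed request as a JSON string, which is the kind of small, testable building block the role describes.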

Skills

Ansible · API · CUDA · Go · InfiniBand · Kubernetes · Linux · NCCL · NVIDIA · OpenStack · Python · RDMA · RoCEv2 · Terraform
