ML Research Intern (PhD) – Runtime Prediction and Automated GPU Selection
Lyceum
About Lyceum
Lyceum is building a user‑centric GPU cloud from the ground up. Our mission is to make high‑performance computing seamless, accessible, and tailored to the needs of modern AI and ML workloads. We’re not just deploying infrastructure; we’re designing and building our own large‑scale GPU clusters from scratch. If you’ve ever wanted to help shape a cloud platform from day one, this is your moment.
The Role
You’ll join our R&D team as a PhD‑level research intern working on runtime prediction, automated hardware selection, and workload efficiency.
You will help design and run experiments, develop models that predict resource requirements and training runtimes, and help integrate these models into our scheduling layer to automate hardware selection and cost prediction for customers. The role is a strong fit if you are currently pursuing a PhD in ML/AI and are looking for an applied research internship.
What We’re Working On
- Automated GPU selection based on model, dataset, and constraints
- Benchmarking across LLMs, vision & multimodal models
- Throughput, latency & stability optimisation at scale
- Reference pipelines, reproducible evaluation suites
- Practical docs, baselines, and performance guidance
What We’re Looking For
- Currently enrolled in a PhD in ML/AI/CS (or closely related field)
- Strong fundamentals in deep learning, optimisation, and model evaluation
- Experience from research projects, a lab, or prior industry collaborations (Research Engineer/Scientist/Intern)
- Interest and ideally first experience in model efficiency or GPU performance (quantization, pruning, large‑scale training, profiling, systems for ML)
- Ownership mindset and rigor in experimentation (clear hypotheses, robust evaluation, ablations)
- Clear writing; reproducible results (code, reports, or papers)
- Right to work in Switzerland (e.g., Swiss/EU passport, enrolled at Swiss university)
- Tech stack: Python, PyTorch/JAX (and/or TensorFlow). CUDA/GPU literacy is a plus.
Bonus Points
- Work on large‑scale or distributed training systems
- Experience with evaluation design, dataset curation, or benchmarking frameworks
- Publications, preprints, or high‑quality open‑source projects relevant to ML systems, runtime prediction, or scheduling
Why Join Us
- Build from zero: This is a rare opportunity to join a startup at the earliest stages and help shape core building blocks of our platform. Your work can directly inform our production scheduling system and future research directions.
- Hard, meaningful problems: We’re tackling some of the most interesting challenges at the intersection of ML, systems, and hardware – runtime prediction, automated hardware selection, and performance optimisation on cutting‑edge GPUs.
- World‑class hardware: You’ll work directly with state‑of‑the‑art GPU hardware and help build one of the most performant compute platforms in Europe.
- Everything else: Competitive internship compensation, close mentorship, potential for ongoing collaboration (e.g., future projects/publications), and a team that cares about helping you do your best research.
Lyceum is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.