Machine Learning Engineer, AI Enablement
Canva
About the team
Canva's AI Platform Group serves as the foundation for AI innovation across the company. Within this group, the AI Enablement teams provide critical support to researchers and AI builders globally, helping them navigate platform capabilities, access the data they need, and accelerate the journey from experimentation to production.
About the role
This role sits at the intersection of data engineering and backend development, with a focus on enabling researchers and AI builders to access high-quality data efficiently. The MLE will own data pipelines end-to-end, build backend services that expose data to research workflows, and work closely with researchers to understand and solve their data access needs. The role suits someone who finds satisfaction in making others faster by building infrastructure that is reliable, well-documented, and easy to use. Strong engineering fundamentals and a pragmatic approach to problem-solving matter more here than ML modelling experience.
Responsibilities
- Design, build, and maintain data pipelines that power AI research and experimentation workflows
- Develop backend services and APIs that give researchers clean, reliable access to data at scale
- Work directly with researchers and AI builders to understand their data needs and translate them into robust engineering solutions
- Improve data infrastructure reliability, observability, and performance
- Debug and resolve data quality issues, identifying root causes and sharing learnings broadly
- Contribute to internal tooling and paved roads that make it easier for the team to work with data consistently
Requirements
- Strong Python expertise with experience building production‑grade data pipelines or backend services
- Experience with data engineering patterns, including batch processing, streaming, data quality, and pipeline orchestration
- Experience working with Kubernetes or similar container orchestration environments
- Solid software engineering fundamentals, including clean APIs, testing, and observability
- Experience supporting or working closely with research, data science, or ML teams
- Strong debugging and problem‑solving skills
- Ability to manage multiple workstreams and communicate clearly with both technical and non‑technical stakeholders
- Collaborative mindset with a genuine focus on enabling others to move faster
Nice to have
- Familiarity with PyTorch or other ML frameworks, with enough understanding to follow what researchers are doing with the data
Benefits
- Equity packages
- Inclusive parental leave policy that supports all parents & carers
- Annual Vibe & Thrive allowance to support wellbeing, social connection, office setup & more
- Flexible leave options that empower you to be a force for good, take time to recharge, and support you personally
Additional information
- Location flexibility: This role is part of our European operations and based in Austria; our flagship campus is in Sydney, Australia. You can choose where and how you work.
- Starting salary: EUR 70,000 (market‑informed, region‑specific equity).
- Hiring considerations: Decisions are based on skills, experience, and cultural fit. Candidates are invited to share their pronouns and any reasonable adjustments they need for the interview process.
- Interview format: Conducted virtually.