AI Inference Engineer QVAC
Jobgether
About the role
This position offers a unique opportunity to work at the cutting edge of on-device AI, building the core systems that power fast, private, and reliable inference on real-world hardware. You will operate close to the metal, designing and optimizing the runtime layer that enables machine learning models to perform efficiently without relying on cloud infrastructure. The position sits at the intersection of systems engineering and AI, where performance, stability, and scalability are critical. You will collaborate with researchers and product teams to bring advanced models into production environments. With a strong focus on low-level optimization and architecture, your work will directly shape the future of decentralized, peer-to-peer AI experiences. This is an ideal role for engineers who enjoy deep technical challenges and ownership of core infrastructure.
Accountabilities
In this role, you will be responsible for designing, optimizing, and maintaining the inference layer that enables high-performance AI execution on edge devices. You will ensure systems are robust, efficient, and scalable across diverse hardware environments.
- Develop and optimize C++-based inference systems for deploying AI models on edge devices.
- Enhance and adapt inference engines such as llama.cpp, ggml, and ONNX Runtime for improved performance and compatibility.
- Improve runtime efficiency, focusing on memory usage, latency, throughput, and long-session stability.
- Collaborate with research teams to transition models from experimentation to production-ready deployments.
- Define and maintain core abstractions that support scalable and maintainable inference capabilities.
- Integrate AI-driven features into existing products, ensuring seamless performance and reliability.
- Continuously evaluate and implement new technologies to improve system capabilities and efficiency.
Requirements
You are a highly skilled engineer with a strong foundation in systems programming and machine learning, capable of working on complex, performance-critical AI infrastructure.
- Strong programming expertise in C++; experience with JavaScript is a plus.
- Proven experience with inference frameworks such as llama.cpp, ggml, ONNX Runtime, or similar technologies.
- Solid understanding of deep learning concepts, including transformers, LLMs, and diffusion models.
- Experience deploying and optimizing machine learning models on edge devices or constrained environments.
- Ability to quickly learn and apply new technologies in a fast-evolving AI landscape.
- Strong problem-solving skills with attention to performance, scalability, and reliability.
- Degree in Computer Science, AI, Machine Learning, or a related field, or equivalent practical experience.
Benefits
- Fully remote, globally distributed work environment
- Opportunity to work on cutting-edge AI and decentralized technologies
- High ownership and impact on core product infrastructure
- Collaboration with top talent in AI, systems engineering, and fintech
- Dynamic, fast-paced environment focused on innovation and experimentation
- Exposure to advanced AI frameworks and next-generation product development
- Competitive compensation aligned with experience and expertise