AI Engineer Intern
Nudle
About NexEra
NexEra is a next-generation XR and AI simulation platform that trains humans today — and will train intelligent agents and robots tomorrow. Our simulation software is used by universities, TVET colleges, governments, and industry partners to deliver immersive, hands-on training at scale.
We are now expanding the platform’s AI capabilities to automatically generate 3D learning content, intelligent avatars, and adaptive role-play — while building the foundations for future robotics training using XR simulations, computer vision, and AI.
We are looking for an AI Engineer who is passionate about generative AI, multimodal systems, 3D pipelines, intelligent avatars, and robotics.
Role Overview
The AI Engineer role evolves in two phases:
PHASE 1 — Build NexEra’s Generative AI Systems
The AI Engineer will lead the development of NexEra’s AI-driven content generation pipelines, enabling the creation of lessons, 3D assets, avatar behaviours, and interactive learning experiences from simple prompts.
This includes:
- Building AI → 3D asset pipelines
- Natural language → avatar behaviour systems
- AI scenario and lesson generation
- Automated assessments and explanations
- AI-powered interaction inside XR simulations
This phase is ideal for someone excited about combining LLMs, multimodal AI, 3D graphics, computer vision, and simulation technologies.
You will work closely with our XR development and creative teams to build the core AI features that power NexEra’s human-training platform.
PHASE 2 — Expand NexEra Into Agent & Robot Training (Future Roadmap)
Once the generative AI foundation is in place, the role evolves to support NexEra’s robot learning pipeline, allowing robots and intelligent agents to learn from:
- XR-based human demonstrations
- AI-generated synthetic data
- Simulation-based reinforcement learning
- Computer vision and multimodal models
You will help build the platform that trains both humans and robots inside the same immersive simulation ecosystem.
Key Responsibilities
PHASE 1 — Generative AI & Learning Systems
- AI-Generated 3D Asset Pipelines
- Build systems that convert text or images into usable 3D models
- Process, optimise, convert, and centre assets (GLB workflows)
- Integrate models into viewers (Babylon.js, Three.js, Unity)
- Generate educational summaries and contextual metadata
- Natural Language → Avatar Behaviour
- Interpret commands using AI
- Map text to animation sequences
- Drive avatars in Unity/Babylon.js
- Generate AI explanations for actions
- Design behaviour mapping logic
- AI Lesson, Scenario & Assessment Generation
- Build multimodal pipelines that create learning modules from prompts
- Generate role-play interactions, dialogues, and tasks
- Develop AI-powered assessment and scoring tools
- Produce real-time feedback and coaching logic
- Simulation Integration
- Insert AI-generated assets into XR scenes
- Support dynamic scenario modification via AI
- Build tools enabling non-technical creators to customise simulations
- Data, Evaluation & UX Thinking
- Collect behaviour data from learners
- Build models to classify, summarise, or evaluate performance
- Ensure AI outputs are accurate, educational, and safe
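The asset-centring step in the GLB workflow above reduces to translating geometry so its bounding-box centre sits at the origin. A minimal, library-free sketch (a production pipeline would use a tool such as trimesh or glTF-Transform), assuming vertices arrive as (x, y, z) tuples:

```python
def centre_vertices(vertices):
    """Translate vertices so the axis-aligned bounding-box centre is at the origin."""
    xs, ys, zs = zip(*vertices)
    centre = [(min(axis) + max(axis)) / 2 for axis in (xs, ys, zs)]
    return [
        (x - centre[0], y - centre[1], z - centre[2])
        for x, y, z in vertices
    ]
```

Re-exporting the translated geometry (alongside the usual optimisation passes) yields a centred asset that drops cleanly into Babylon.js, Three.js, or Unity viewers.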
PHASE 2 — Robotics & Intelligent Agent Training (Future Work)
- Build NexEra’s Robot Learning System
- Design ML pipelines for robot training (RL, IL, motion planning)
- Translate human demonstrations in XR into robot datasets
- Build interfaces allowing robots to learn skills inside simulations
- Robotics Simulation Integration
- Work with Unity or Isaac Sim to create robot training environments
- Develop digital twins for agricultural, engineering, and construction tasks
- Implement Sim2Real techniques for policy transfer
- Computer Vision & Perception
- Develop models for object detection, tracking, environment understanding
- Train vision-language-action models for instruction following
- Convert WebXR/AR data into perception datasets
- Hardware Integration (e.g., Jetson Orin, Unitree)
- Deploy trained policies onto embedded robotics hardware
- Work with ROS/ROS2 control stacks
- Conduct real-world behaviour testing and refine policies
- Long-term Data Engine Development
- Build datasets from human actions and simulation events
- Improve training efficiency using synthetic data
- Contribute to NexEra’s growing library of reusable agent/robot skills
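As a rough illustration of the "XR demonstrations → robot datasets" step above: in behaviour-cloning style pipelines, a demonstration becomes a list of (state, action) pairs, where the action is the pose the demonstrator reaches at the next frame. This hypothetical sketch assumes frames are dicts with a "pose" key:

```python
def demos_to_dataset(frames):
    """Turn a time-ordered XR demonstration into (state, action) pairs,
    using the next frame's pose as the action for the current state."""
    return [
        (frame["pose"], nxt["pose"])
        for frame, nxt in zip(frames, frames[1:])
    ]
```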
Required Skills & Experience
Core AI + ML
- Strong experience in Python, PyTorch, TensorFlow, or JAX
- Experience with multimodal or generative AI models
- Understanding of language → action mapping
- Ability to build ML pipelines end-to-end
3D & Simulation Skills
- Experience with Unity, Babylon.js, Three.js, or similar
- Working knowledge of GLB/FBX/OBJ processing
- Familiarity with 3D assets, animation systems, or digital twins
Computer Vision
- Ability to implement perception models (detection, segmentation, pose)
- Experience with image/video pipelines
- Familiarity with depth, camera calibration, or sensor processing (bonus)
Robotics (For Phase 2, Not Required Initially)
- Experience with ROS or ROS2
- Understanding of robot control/motion planning
- Experience deploying models to embedded hardware (e.g., Jetson)
- Familiarity with RL or imitation learning frameworks
Bonus Skills (Highly Valued but Not Required)
- Experience with 3D reconstruction or generative 3D models
- Fine-tuning LLMs for behaviour control or simulations
- Experience with humanoid or quadruped robots (InMoov, Poppy, NimbRo, Unitree)
- Interest in building commercial robotics training systems
- Experience with AR overlays or mixed reality interaction
Job Type: Internship
Contract length: 6 months
Work Location: In person