Senior Data Scientist – Computer Vision & Video Machine Learning
AST SpaceMobile
About
AST SpaceMobile is building the first and only global cellular broadband network in space designed to operate directly with standard, unmodified mobile devices. Built on our extensive IP and patent portfolio, the network serves both commercial and government applications. Our engineers and space scientists are on a mission to eliminate the connectivity gaps faced by today’s five billion mobile subscribers and finally bring broadband to the billions who remain unconnected.
We are seeking a Senior Data Scientist – Computer Vision & Video Machine Learning to design, develop, and deploy advanced visual AI solutions across a complex physical and operational environment. This role focuses on building production‑grade computer vision and video machine learning systems that enable automated inspection, physical‑world understanding, and vision‑guided robotics.
The ideal candidate brings deep expertise in modern CV and video ML architecture, strong data science fundamentals, and disciplined MLOps practices to deliver scalable, reliable, real‑world visual intelligence systems.
Responsibilities
- Design, train, and deploy computer vision and video ML models for automated visual inspection, diagnostics, and physical‑world analysis
- Develop video understanding and temporal ML pipelines for learning physical procedures and enabling vision‑guided automation
- Build perception models that allow automated and robotic systems to recognize objects, understand spatial context, and track task execution
- Develop object detection, semantic segmentation, classification, and anomaly detection models across still imagery and video streams
- Implement robust data cleaning and preparation workflows for image and video datasets, addressing noise, lighting variation, calibration drift, temporal alignment, and class imbalance
- Design state‑space and feature representations for vision‑guided reinforcement learning agents and adaptive control systems
- Develop iterative and online training pipelines that continuously improve models using production feedback and newly collected data
- Apply physics‑informed ML constraints where visual patterns correlate with underlying physical processes
- Implement MLOps best practices including experiment tracking, model versioning, automated validation pipelines, model registry management, and production monitoring
- Collaborate with engineering teams to optimize vision model inference for deployment on edge devices, embedded systems, and cloud infrastructure
- Manage large‑scale image and video annotation pipelines in partnership with domain experts
- Communicate model performance, system impact, and insights through clear, executive‑facing analytics and visualizations
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Robotics, Machine Learning, Statistics, or a related quantitative field
- Equivalent professional experience will be considered
Experience
- Minimum of 6 years of experience in data science or machine learning
- At least 3 years of hands‑on experience focused on computer vision and/or video machine learning with production deployments
Preferred Qualifications
- Experience with visual inspection, industrial quality systems, or automated perception pipelines
- Background in robotic perception, manipulation, or autonomous system development
- Experience with video‑based imitation learning or learning from demonstration
- Exposure to complex physical systems, regulated environments, or safety‑critical applications
- Advanced degree (PhD) with applied research in computer vision, video ML, robotics, or related fields
- Experience with generative models for synthetic data generation or anomaly detection
Soft Skills
- Strong interpersonal skills and ability to collaborate effectively within cross‑functional teams
- Excellent written and verbal communication skills, including executive‑level presentations
- Meticulous attention to detail to ensure accuracy, reproducibility, and reliability of models
- Strong problem‑solving and analytical thinking skills
- Ability to operate effectively in fast‑paced, ambiguous environments
- Ownership mindset with a focus on delivering production‑ready solutions
Technology Stack
- Python machine learning and computer vision frameworks (e.g., PyTorch, TensorFlow, OpenCV)
- Deep learning architectures for vision and video (object detection, segmentation, temporal modeling)
- MLOps and experiment tracking tools (e.g., MLflow, Weights & Biases, CI/CD for ML)
- Data annotation and dataset management tools
- SQL and data pipelines for large‑scale unstructured visual data
- Edge and cloud deployment technologies for ML inference
Physical Requirements
- Ability to work in a standard office or remote environment
- Ability to use a computer and multiple monitors for extended periods
- Ability to participate in meetings, design reviews, and collaborative working sessions