
COMPUTER VISION ENGINEER (LLM & AI Integration)

Duncan & Ross

UAE · On-site · Senior · Posted today

About the role

Job Summary

We are seeking an experienced Computer Vision Engineer with a strong background in AI and Large Language Models (LLMs). The ideal candidate will design, build, and deploy computer vision solutions that integrate with generative AI and LLM frameworks to interpret, analyze, and describe visual data. This role bridges the gap between image understanding and natural language processing, enabling intelligent visual-language applications.

Responsibilities

  • Develop and implement computer vision models for image classification, object detection, segmentation, facial recognition, and visual understanding.
  • Integrate vision models with LLMs (e.g., GPT, LLaVA, CLIP, or multimodal models) to build systems that interpret and describe visual content.
  • Design AI pipelines that combine text, images, and video data for multimodal learning and reasoning.
  • Utilize deep learning frameworks (TensorFlow, PyTorch, OpenCV) to prototype and deploy models.
  • Collaborate with data scientists and AI researchers to fine-tune vision-language models for specific tasks such as visual QA, captioning, or scene analysis.
  • Implement data preprocessing, augmentation, and annotation pipelines for large‑scale image datasets.
  • Benchmark, optimize, and deploy models in production environments.
  • Research and experiment with emerging techniques in generative AI, multimodal transformers, and neural architectures.
  • Build APIs and tools that enable internal teams to utilize vision + LLM capabilities.
  • Ensure compliance with ethical AI practices, including bias mitigation and data privacy.
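The CLIP-style vision-LLM integration named above ultimately reduces to comparing image and text embeddings. A minimal sketch of that matching step follows; in a real system both vectors would come from the model's encoders, while the embeddings and captions here are hard-coded illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding for one image (in practice: CLIP's image encoder).
image_emb = [0.9, 0.1, 0.2]

# Hypothetical embeddings for candidate captions (in practice: CLIP's text encoder).
captions = {
    "a photo of a cat": [0.88, 0.12, 0.25],
    "a diagram of a network": [0.10, 0.95, 0.05],
    "a city skyline at night": [0.20, 0.30, 0.90],
}

# Pick the caption whose embedding is most similar to the image's.
scores = {text: cosine(image_emb, emb) for text, emb in captions.items()}
best_caption = max(scores, key=scores.get)
print(best_caption)  # → a photo of a cat
```

The same nearest-neighbor logic underlies zero-shot classification, captioning retrieval, and visual search; production systems swap the toy vectors for encoder outputs and a vector index.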

Qualifications

  • Bachelor's or Master's degree in Computer Science, AI, Computer Vision, or a related field (PhD preferred).
  • 3-7 years of experience in computer vision, deep learning, or multimodal AI.
  • Strong proficiency in Python and frameworks such as PyTorch, TensorFlow, Keras, and OpenCV.
  • Experience integrating LLMs (GPT, Claude, Gemini, or open‑source models) with vision systems.
  • Solid understanding of transformer architectures, CNNs, diffusion models, and attention mechanisms.
  • Familiarity with multimodal datasets (COCO, Visual Genome, etc.) and evaluation metrics for vision tasks.
  • Experience with cloud‑based AI tools (Azure AI, AWS SageMaker, Google Vertex AI, etc.).
  • Ability to write clean, scalable, and production‑grade code.
  • Strong analytical, problem‑solving, and communication skills.

Preferred Qualifications

  • Experience with multimodal LLM frameworks such as CLIP, BLIP, LLaVA, or Kosmos-2.
  • Background in natural language processing and prompt engineering.
  • Hands‑on experience with edge deployment (NVIDIA Jetson, OpenVINO, ONNX).
  • Knowledge of reinforcement learning, generative models, or 3D vision.
  • Publications or open‑source contributions in AI research are a plus.

Skills

AWS SageMaker, Azure AI, CLIP, CNNs, COCO, Computer Vision, Deep Learning, Diffusion Models, Google Vertex AI, GPT, Keras, LLaVA, Multimodal AI, NVIDIA Jetson, Object Detection, ONNX, OpenCV, OpenVINO, Python, PyTorch, Segmentation, TensorFlow, Transformer Architectures, Visual Genome
