Junior GCP Cloud Engineer at Openkyber Idaho

Openkyber

US · Hybrid · Full-time · Entry Level

About the role

Job Title

Junior GCP Cloud Engineer

Location

St. Louis, MO, or Atlanta, GA (hybrid position)

Role Overview

We're looking for an AI Engineer who loves turning data into intelligent systems that actually work in the real world. You'll design, build, and deploy machine learning and AI solutions that power products, improve decisions, and automate complex tasks. This role blends research thinking with hands-on engineering. If you enjoy experimenting with models but also care about performance, scalability, and clean production code, you'll fit right in.

Key Responsibilities

  • Design, develop, and deploy machine learning and deep learning models
  • Build and maintain data pipelines for training and inference
  • Work closely with product managers, data scientists, and software engineers to turn business needs into AI solutions
  • Fine‑tune and optimize models for accuracy, speed, and scalability
  • Deploy models into production using cloud or on‑prem infrastructure
  • Monitor model performance and retrain when needed
  • Stay up to date with the latest AI research and evaluate new techniques
  • Document systems, experiments, and model behavior clearly
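The "monitor model performance and retrain when needed" responsibility above can be sketched as a simple threshold check: compare recent accuracy against the baseline recorded at deployment, and flag retraining when the gap exceeds a tolerance. This is an illustrative sketch only; the function names and the 0.05 tolerance are assumptions, not anything specified in the posting.

```python
# Minimal sketch of a retraining trigger. Names and the tolerance value
# are illustrative assumptions, not a prescribed implementation.

def should_retrain(baseline_acc: float, recent_acc: float,
                   tolerance: float = 0.05) -> bool:
    """Return True when recent accuracy has degraded past the tolerance."""
    return (baseline_acc - recent_acc) > tolerance

def monitor(baseline_acc: float, recent_accuracies: list[float]) -> list[int]:
    """Return indices of evaluation windows that would trigger retraining."""
    return [i for i, acc in enumerate(recent_accuracies)
            if should_retrain(baseline_acc, acc)]

if __name__ == "__main__":
    # Baseline 0.90; windows 1 and 3 have drifted beyond the 0.05 tolerance.
    print(monitor(0.90, [0.89, 0.82, 0.88, 0.80]))  # [1, 3]
```

In practice the comparison metric, evaluation window, and threshold would come from the team's monitoring stack (e.g. MLflow-logged metrics), but the decision logic reduces to a check like this.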

Required Skills and Qualifications

  • Strong programming skills in Python (and familiarity with software engineering best practices)
  • Experience with ML/DL frameworks such as TensorFlow, PyTorch, or JAX
  • Solid understanding of machine learning fundamentals (supervised, unsupervised, and deep learning)
  • Experience with data preprocessing, feature engineering, and model evaluation
  • Familiarity with SQL and working with large datasets
  • Experience deploying models via APIs, batch jobs, or streaming systems
  • Understanding of version control (Git) and collaborative development workflows
  • Experience with LLMs, NLP, or computer vision
  • Knowledge of MLOps tools such as MLflow, Kubeflow, Airflow, or SageMaker
  • Experience with Docker and Kubernetes
  • Familiarity with cloud platforms (AWS, Google Cloud Platform, or Azure)
  • Background in data engineering or distributed systems
  • Experience with model monitoring, drift detection, and retraining pipelines
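As a small illustration of the "data preprocessing, feature engineering, and model evaluation" skills listed above, here is a pure-Python sketch of z-score standardization and accuracy computation. The helper names are hypothetical, chosen for the example rather than taken from the posting.

```python
import math

# Illustrative sketch of two listed skills: feature standardization
# (preprocessing) and accuracy (model evaluation). Names are hypothetical.

def standardize(values: list[float]) -> list[float]:
    """Z-score a feature column: subtract the mean, divide by the std dev."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var) or 1.0  # guard against a zero-variance column
    return [(v - mean) / std for v in values]

def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

if __name__ == "__main__":
    print(standardize([2.0, 4.0, 6.0]))           # zero-mean, unit-std column
    print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))   # 0.75
```

Production code would reach for libraries (e.g. scikit-learn's scalers and metrics) rather than hand-rolled helpers, but these are the operations the bullets refer to.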

Education & Experience

  • Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field, or equivalent hands-on experience
  • Experience in building and deploying machine learning models (flexible based on skill level)

Contact

For applications and inquiries, contact: hirings@openkyber.com

Skills

Airflow, AWS, Azure, computer vision, data engineering, deep learning, distributed systems, Docker, Git, Google Cloud Platform, JAX, Kubernetes, LLMs, MLflow, MLOps, NLP, Python, PyTorch, SageMaker, SQL, supervised learning, TensorFlow, unsupervised learning
