
Python AI Engineer - RAG Pipelines & Autonomous Agents


Rajkot · Hybrid Full-time Senior Today

About the role

Summary

A client of Coretek Labs is immediately hiring a Python AI Engineer – RAG Pipelines & Autonomous Agents.

Title: Python AI Engineer – RAG Pipelines & Autonomous Agents

Position type: Full-Time / Contract

Location: Hybrid/Remote

Responsibilities

Generative Agentic AI Engineering

  • Build and optimize LLM-driven autonomous agents, multi-agent systems, and tool-using workflows.
  • Develop Model Context Protocol (MCP) servers and structured context-management frameworks.
  • Architect scalable RAG pipelines (embeddings, vector search, retrieval layers, grounding strategies, prompt engineering).
  • Implement LLM function calling, multi-step orchestration, guardrails, evaluation frameworks, and safety/quality controls.
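The function-calling and guardrails responsibilities above can be sketched in plain Python. The tool registry, tool names, and the shape of the "model-issued call" dict here are illustrative assumptions, not any specific model vendor's API:

```python
# Minimal sketch of an LLM tool-calling dispatch loop with an allow-list guardrail.
# The registry and the structured "tool call" dict are illustrative assumptions.
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function so the agent may call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stand-in for a real tool (API lookup, database query, etc.).
    return f"Sunny in {city}"

def dispatch(call: dict[str, Any]) -> Any:
    """Execute a model-issued tool call, rejecting unregistered tools (guardrail)."""
    name = call["name"]
    if name not in TOOLS:
        raise ValueError(f"Tool '{name}' is not allowed")
    return TOOLS[name](**call.get("arguments", {}))

# A structured tool call, as an LLM's function-calling output might be parsed:
result = dispatch({"name": "get_weather", "arguments": {"city": "Rajkot"}})
```

In a production agent the dispatch step would also validate argument schemas and log each call for evaluation; this sketch shows only the allow-list pattern.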

Python Enterprise AI Engineering

  • Build high-performance Python microservices and AI APIs using FastAPI, Flask, LangChain, LlamaIndex, and MCP SDKs.
  • Engineer distributed AI systems for large-scale inference, retrieval, and multi-agent orchestration.
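The multi-agent orchestration bullet above leans on Python's async stack. A minimal sketch, assuming stand-in agent coroutines in place of real LLM or retrieval calls:

```python
# Sketch of fanning one query out to several "agents" concurrently with asyncio.
# The agent coroutines below are stand-ins for real async LLM/retrieval calls.
import asyncio

async def summarizer(query: str) -> str:
    await asyncio.sleep(0)  # placeholder for an awaited LLM API call
    return f"summary of {query}"

async def retriever(query: str) -> str:
    await asyncio.sleep(0)  # placeholder for an awaited vector-search call
    return f"docs for {query}"

async def orchestrate(query: str) -> list[str]:
    # Run both agents concurrently; gather preserves argument order.
    return await asyncio.gather(summarizer(query), retriever(query))

results = asyncio.run(orchestrate("pricing"))
```

The same fan-out pattern drops directly into an async FastAPI endpoint, which is one reason FastAPI is listed alongside the orchestration frameworks.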

Technologies Ecosystem

  • Use Jupyter Notebooks, Tachyon, and enterprise GenAI platforms for experimentation and model refinement.
  • Leverage GitHub Copilot and modern DevSecOps workflows to accelerate development.
  • Contribute to reusable AI patterns, enterprise accelerators, and Responsible AI guardrails.

Cloud Platform Engineering

  • Deploy and operate AI solutions on Google Cloud Platform (GKE, Vertex AI, Cloud Run, IAM).
  • Containerize and orchestrate AI services using Red Hat OpenShift and enterprise Kubernetes.
  • Build and manage CI/CD pipelines aligned to DevOps best practices.

Data Integration & Retrieval

  • Work with vector databases (MongoDB Atlas Vector Search, Chroma, Pinecone, Redis, pgVector).
  • Build secure, scalable retrieval layers, embedding pipelines, and long-term AI memory modules.
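The retrieval-layer work above reduces, at its core, to similarity search over embeddings. A toy sketch in pure Python; the document names and vectors are hypothetical, and a real pipeline would use one of the listed vector databases with model-generated embeddings:

```python
# Toy retrieval layer: rank documents by cosine similarity of (fake) embeddings.
# Real systems would use a vector database and learned embedding vectors.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings (an embedding model would produce these).
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]), reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.2, 0.0])
```

The retrieved documents would then be injected into the LLM prompt as grounding context, which is the "R" in RAG.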

Architecture, Governance & Delivery

  • Participate in architecture reviews and contribute to compliant AI governance frameworks.
  • Ensure adherence to risk, security, and regulatory standards.
  • Lead POCs, engineering improvements, and innovation workstreams with minimal oversight.

Requirements (Ideal Candidate)

  • Experience building LLM-based generative AI or agentic AI systems.
  • Experience in Python (async programming, APIs, microservices, distributed systems).
  • Experience with GCP and OpenShift/Kubernetes for scalable AI deployments.
  • Experience with RAG pipelines, embeddings, vector search, and LLM orchestration.
  • Experience with Jupyter, Tachyon, GitHub Copilot, CI/CD, and modern DevOps tooling.
  • Demonstrated ability to work independently and drive AI innovation.
  • Familiarity with LangChain, LlamaIndex, and APIs for OpenAI, Google Gemini, or similar models.
  • Experience with enterprise observability stacks (Grafana, Cloud Logging, Prometheus).



Skills

FastAPI · GCP · Gemini · GitHub Copilot · Grafana · IAM · Jupyter · Kubernetes · LangChain · LlamaIndex · MongoDB Atlas Vector Search · OpenShift · OpenAI · Pinecone · Prometheus · Python · Redis · Tachyon · Vertex AI
