
Product Engineer

Fort Hill Data

India · Flexible · Full-time · Lead

About the role

Logix (Fort Hill) is redefining how the world understands complex, high‑stakes documents: contracts, financial records, and regulatory filings. We are building an AI‑native document intelligence platform that transforms unstructured chaos into structured, auditable truth, powered by multi‑agent systems, human‑in‑the‑loop workflows, and both frontier and open‑source AI models.

Why This Role Is Different

  • Global role collaborating across time zones with engineering, AI, audit, and product teams.
  • Operates on ownership over tasks, accountability over activity, and impact over optics.
  • You build it, you own it, you evolve it.

Responsibilities

AI Orchestration, Models & Intelligence Layer

  • Design and scale end‑to‑end AI pipelines from ingestion to structured outputs.
  • Work with both proprietary (OpenAI, Anthropic) and open‑source models.
  • Experiment with and optimize locally deployed models (e.g., NVIDIA stack, on‑device inference, fine‑tuning workflows).
  • Contribute to model selection, evaluation, and training strategies, not just consumption.
  • Build human‑in‑the‑loop systems where AI meets expert validation.
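To give a concrete flavor of the human‑in‑the‑loop bullet above, here is a minimal sketch of a confidence‑gated review queue. Everything in it (the field names, the 0.85 threshold, the `Extraction` and `ReviewQueue` classes) is illustrative, not part of Logix's actual stack:

```python
from dataclasses import dataclass, field

@dataclass
class Extraction:
    """One field pulled from a document by a model, with its confidence."""
    name: str
    value: str
    confidence: float

@dataclass
class ReviewQueue:
    """Routes low-confidence extractions to an expert for validation."""
    threshold: float = 0.85
    pending: list = field(default_factory=list)
    accepted: list = field(default_factory=list)

    def route(self, item: Extraction) -> None:
        # High-confidence results flow straight through; the rest
        # wait for a human reviewer before entering the final record.
        if item.confidence >= self.threshold:
            self.accepted.append(item)
        else:
            self.pending.append(item)

queue = ReviewQueue()
queue.route(Extraction("contract_value", "$1.2M", 0.97))
queue.route(Extraction("termination_date", "2031-06-30", 0.42))
```

In practice the confidence would come from the model itself (log-probabilities, self-reported scores, or a verifier pass), and the pending queue would feed a reviewer UI.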

Backend Systems & APIs

  • Architect and own FastAPI services across the full document lifecycle.
  • Build clean, scalable APIs powering internal systems and external integrations.
  • Own async‑first backend architecture with PostgreSQL/Supabase.

Infrastructure, Scale & Reliability

  • Build resilient async task orchestration systems, including retry logic, dead‑letter queues, concurrency control, and failure recovery.
  • Own and evolve CI/CD pipelines (Docker, AWS ECS/ECR).
  • Drive production reliability through debugging, monitoring, and root‑cause analysis.
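As a rough illustration of the retry-logic and dead-letter-queue bullets above, here is a stdlib-only asyncio sketch. The attempt counts, delays, and the in-memory `DEAD_LETTERS` list are assumptions for the example; a production system would use a durable queue:

```python
import asyncio

DEAD_LETTERS: list = []  # tasks that exhausted their retries

async def run_with_retries(task, payload, *, attempts=3, base_delay=0.01):
    """Retry a coroutine with exponential backoff; dead-letter on exhaustion."""
    for attempt in range(attempts):
        try:
            return await task(payload)
        except Exception as exc:
            if attempt == attempts - 1:
                # Out of retries: park the payload for later inspection.
                DEAD_LETTERS.append((payload, repr(exc)))
                return None
            await asyncio.sleep(base_delay * 2 ** attempt)

calls = 0
async def flaky(payload):
    # Simulates a task that fails twice before succeeding.
    global calls
    calls += 1
    if calls < 3:
        raise RuntimeError("transient failure")
    return f"processed {payload}"

result = asyncio.run(run_with_retries(flaky, "doc-42"))
```

Concurrency control in the same spirit would wrap `run_with_retries` calls in an `asyncio.Semaphore` to cap in-flight tasks.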

Requirements

  • 4+ years building production‑grade Python backend systems.
  • Deep expertise in FastAPI + async Python (you understand the event loop, not just the syntax).
  • Experience working with LLMs in production (OpenAI, Anthropic, or open‑source equivalents).
  • Strong PostgreSQL/Supabase knowledge (performance, schema design, async patterns).
  • Hands‑on AWS experience (S3, ECS/ECR, IAM).
  • Experience with document parsing / OCR pipelines.
  • Ownership of Docker + CI/CD pipelines.
  • Experience with open‑source model training, fine‑tuning, or local deployment (e.g., LLaMA variants, Mistral).
  • Familiarity with GPU‑based workflows (NVIDIA stack, local inference, optimization).
  • Built systems using LangGraph / LangChain with stateful agent orchestration.
  • Strong testing discipline (pytest, async testing).
  • Experience with Stripe billing systems.
  • Background in regulated or document‑heavy industries.
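As a small illustration of the async-testing requirement above: with pytest-asyncio the test itself would be `async def`; this stdlib-only sketch drives the same coroutine with `asyncio.run` (the `fetch_status` coroutine is hypothetical):

```python
import asyncio

async def fetch_status(doc_id: str) -> dict:
    # Stand-in for an async DB or API call.
    await asyncio.sleep(0)
    return {"doc_id": doc_id, "status": "parsed"}

def test_fetch_status():
    # pytest would collect this; pytest-asyncio would let it be async itself.
    result = asyncio.run(fetch_status("abc"))
    assert result["status"] == "parsed"

test_fetch_status()
```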

How We Work

  • You own what you ship: design, deployment, production, and iteration.
  • Collaboration with a global, high‑performance team.
  • Trusted to make decisions without waiting for approval.
  • Expected to go deep, move fast, and deliver quality.

If you want to work at the intersection of AI systems and real‑world impact, go beyond APIs into model‑level thinking and experimentation, build with both cutting‑edge and open‑source AI stacks, and own your work end‑to‑end—Logix is where you do it.

Skills

AWS, AWS ECS, AWS ECR, AWS IAM, AWS S3, Anthropic, Async Python, Docker, FastAPI, GPU, LangChain, LangGraph, LLaMA, Mistral, NVIDIA, OpenAI, PostgreSQL, Python, pytest, Stripe, Supabase
