Forward Deployed AI Engineer

Omnilex

Hybrid · Mid Level · CHF 8k – CHF 12k/mo · 2w ago

About the role

Why Omnilex?

At Omnilex, we’re on a mission to transform the way lawyers work. Our AI-native platform helps legal professionals boost their productivity in legal research and automate their workflows. We collaborate closely with our clients and iterate at a market-leading pace. Within a year, we have gone from an early MVP to a product used daily by thousands of legal professionals at client firms in Switzerland, Germany, and Liechtenstein, and we are now scaling rapidly across Europe.

We already stand out by handling unique challenges: combining external data, customer-internal data, and our own AI-first legal commentaries.

You’ll be joining a young, passionate, and dynamic team of 14, with roots at ETH Zurich.

Your role

You like the last mile – the part where an AI product stops being a demo and starts surviving real life: inconsistent documents, weird naming conventions, strict access rules, stakeholders who notice every edge case, and workflows that were never designed for “AI assistants.”

You’re the person who can sit with a legal team, understand what they actually need, translate that into system behavior, and then implement it cleanly. You enjoy being the connective tissue between customers, domain experts, and the core engineering team, shipping practical improvements and leaving behind crisp documentation so the next rollout is smoother.

What you'll do

Customer rollouts & customization (the heart of the job)

  • Lead technical onboarding for new customers: ingest documents, build indexes, map metadata (jurisdiction, authority, recency), and run validation checks.
  • Tune retrieval and reranking behavior to match customer expectations (practice area focus, internal taxonomies, document patterns, relevance definitions).
  • Deliver customer‑specific UX and workflow adaptations: templates, default filters, jurisdiction presets, citation formatting, permission‑aware retrieval, and customized result views.
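To make the metadata-mapping step above concrete, here is a minimal TypeScript sketch of normalizing customer-specific document metadata (jurisdiction, authority, recency) into an index record with batch-friendly validation. The field names from the raw rows (`land`, `datum`, `gericht`) and all function names are illustrative assumptions, not Omnilex's actual schema.

```typescript
// Hypothetical sketch: map a raw, customer-specific metadata row into a
// normalized index record, collecting validation errors instead of throwing
// so an entire ingestion batch can be reported at once.

type Jurisdiction = "CH" | "DE" | "LI";

interface IndexRecord {
  docId: string;
  jurisdiction: Jurisdiction;
  authority: string; // issuing court or body
  decidedAt: Date;   // used for recency-aware ranking
}

function mapMetadata(
  raw: Record<string, string>,
): { record?: IndexRecord; errors: string[] } {
  const errors: string[] = [];

  const jurisdiction = raw["land"]?.toUpperCase();
  if (jurisdiction !== "CH" && jurisdiction !== "DE" && jurisdiction !== "LI") {
    errors.push(`unknown jurisdiction: ${raw["land"]}`);
  }

  const decidedAt = new Date(raw["datum"]);
  if (Number.isNaN(decidedAt.getTime())) {
    errors.push(`unparseable date: ${raw["datum"]}`);
  }

  if (errors.length > 0) return { errors };
  return {
    record: {
      docId: raw["id"],
      jurisdiction: jurisdiction as Jurisdiction,
      authority: raw["gericht"] ?? "unknown",
      decidedAt,
    },
    errors,
  };
}
```

In practice the validation report, not an exception, is what drives the onboarding conversation with the customer.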

Production‑grade LLM workflows

  • Adjust prompting and context strategies to meet strict requirements (grounding, traceability, citation style, explanation depth, fallback behavior).
  • Build and enforce guardrails: provenance tracking, source‑grounded generation, “no source no statement” rules, and risk‑aware uncertainty patterns suitable for legal contexts.
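A "no source, no statement" rule can be sketched as a post-generation filter: every sentence must carry at least one citation marker that resolves to a retrieved source. The `[1]`-style citation format and the function below are assumptions for illustration, not Omnilex's actual guardrail API.

```typescript
// Illustrative "no source, no statement" check: split an answer into
// sentences and keep only those whose citations all resolve against the
// retrieved sources.

interface Source {
  id: number;
  title: string;
}

function enforceGrounding(answer: string, sources: Source[]): string[] {
  const known = new Set(sources.map((s) => s.id));
  const sentences = answer
    .split(/(?<=[.!?])\s+/)
    .filter((s) => s.length > 0);
  return sentences.filter((sentence) => {
    const cited = [...sentence.matchAll(/\[(\d+)\]/g)].map((m) => Number(m[1]));
    // Require at least one citation, and every citation must be a known source.
    return cited.length > 0 && cited.every((id) => known.has(id));
  });
}
```

Dropped sentences would typically be logged for failure analysis rather than silently discarded.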

Field iteration & quality loops

  • Create small but high‑signal evaluation sets per customer (gold questions, acceptance criteria, “cannot fail” scenarios).
  • Perform fast failure analysis and ship improvements: chunking changes, deduping, reranker adjustments, query interpretation tweaks, caching, and routing strategies.
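A per-customer gold-question set can be as simple as the sketch below: questions with acceptance criteria and a "cannot fail" flag that blocks a rollout. The keyword-based check and all names are illustrative; real acceptance criteria would be richer (citation checks, graded relevance).

```typescript
// Minimal per-customer evaluation loop over a small gold-question set.

interface GoldQuestion {
  question: string;
  mustContain: string[]; // acceptance criteria, matched case-insensitively
  cannotFail: boolean;   // a failure here blocks the rollout
}

interface EvalResult {
  passed: number;
  failed: number;
  blockers: string[]; // questions that failed despite being "cannot fail"
}

function runEvalSet(
  goldSet: GoldQuestion[],
  answer: (q: string) => string, // the system under test
): EvalResult {
  const result: EvalResult = { passed: 0, failed: 0, blockers: [] };
  for (const g of goldSet) {
    const a = answer(g.question).toLowerCase();
    const ok = g.mustContain.every((kw) => a.includes(kw.toLowerCase()));
    if (ok) {
      result.passed++;
    } else {
      result.failed++;
      if (g.cannotFail) result.blockers.push(g.question);
    }
  }
  return result;
}
```

The point of keeping the set small is that it can be re-run after every chunking or reranker change without ceremony.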

Latency, cost, and operational reliability

  • Keep response times and usage costs sane through batching, caching, early exits, and practical fallback paths.
  • Track quality signals and usage patterns; convert feedback into measurable fixes and clear acceptance tests.
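Two of the levers above, caching and fallback paths, can be sketched in a few lines. This is an in-memory TTL cache for illustration only; a production deployment would likely use shared infrastructure, and all names here are assumptions.

```typescript
// A TTL-keyed response cache plus a graceful-degradation wrapper: serve from
// cache when possible (early exit, no model call), otherwise try the primary
// path, and fall back if it throws.

interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

class TtlCache<T> {
  private store = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict and miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

async function cachedAnswer(
  query: string,
  cache: TtlCache<string>,
  primary: (q: string) => Promise<string>,
  fallback: (q: string) => string,
): Promise<string> {
  const hit = cache.get(query);
  if (hit !== undefined) return hit; // early exit: no model call, no cost
  try {
    const fresh = await primary(query);
    cache.set(query, fresh);
    return fresh;
  } catch {
    return fallback(query); // keep the product responsive when the model is down
  }
}
```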

Cross‑team execution & knowledge capture

  • Work closely with Customer Success and legal experts to convert pain into engineering work.
  • Write deployment playbooks and integration “recipes” so customer solutions become repeatable patterns over time.

What you bring

Must-haves

  • Strong practical experience building or adapting search/retrieval systems in production (hybrid retrieval, reranking, indexing, query understanding).
  • Experience taking LLM features from prototype to stable, real‑world usage.
  • Solid TypeScript/Node.js skills (our core stack).
  • Hands‑on experience with at least one of: Azure AI Search, pgvector/PostgreSQL, OpenSearch/Elasticsearch (or comparable systems).
  • Strong engineering judgment: debugging skills, performance tuning, careful edge‑case handling, and operational thinking.
  • Comfortable working directly with customers: deep technical sessions, trade‑off explanations, and clear written documentation.
  • Fluent English; available full‑time.
  • Hybrid setup: at least two days per week on‑site in Zurich.
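The hybrid-retrieval experience asked for above usually means fusing a sparse (BM25-style) result list with a dense (vector) one. Reciprocal rank fusion (RRF) is one common, tuning-free way to combine them; this sketch is purely illustrative of the skill, not Omnilex's method.

```typescript
// Reciprocal rank fusion: each document's fused score is the sum of
// 1 / (k + rank) over every ranked list it appears in (rank is 1-based here;
// k = 60 is the conventional default).

function reciprocalRankFusion(lists: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const list of lists) {
    list.forEach((docId, index) => {
      const rank = index + 1;
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}
```

Documents ranked well by both retrievers bubble to the top without any score normalization across the two systems.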

Nice‑to-haves

  • German proficiency (many sources and stakeholder conversations are German‑speaking).
  • Experience integrating customer document sources and pipelines (connectors, ETL, access controls).
  • Experience with lightweight evaluation processes (human labeling loops, basic agreement checks, simple dashboards).
  • Familiarity with sparse + dense retrieval approaches (BM25 variants included).
  • Experience running and operating services (Docker a plus).
  • Familiarity with Azure / NestJS / Next.js.
  • Exposure to Swiss / German / US legal systems.

Benefits

  • Tangible customer impact: your work directly affects daily trust and adoption inside legal teams.
  • High ownership: you run deployments end‑to‑end and help define reusable solution patterns.
  • Fast feedback loops: you’ll see real failure modes early and influence product direction with evidence.
  • Compensation: CHF 8’000–12’000 per month + ESOP, depending on experience and skills.
