AI Systems Architect
LeadStack Inc.
Blue Ash · On-site · Contract · $80–$100/hr · 1w ago
About the role
Join the AI Enablement team to shape the architectural foundation of our enterprise agent ecosystem. This role focuses on designing and governing the architecture for agent-based integrations, registries, scoring and evaluation infrastructure, grounding patterns, and multi-agent orchestration platforms.
Provide comprehensive technical leadership across engineering, product, data science, security, and cloud teams to build safe, consistent, and reliable agents that perform at an enterprise level with robust observability.
Technical Leadership in AI Agentic Platforms
- Define and develop the enterprise's reference architecture for AI agents, which includes orchestration frameworks, tool integration patterns, MCP servers, registries, and multi-agent coordination.
- Design scalable agent orchestration platforms that support autonomous, productivity-enhancing workflows across business domains.
- Maintain operational uptime and SLA compliance while planning upgrades and rolling out new capabilities for the agent platform.
- Establish foundational patterns using semantic layers, vector search, knowledge models, and Retrieval-Augmented Generation (RAG).
- Design the systems that connect agents to trusted enterprise data, APIs, and business services.
- Create architectural patterns that ensure the safe execution of agents aligned with Responsible AI principles.
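The grounding bullets above can be sketched in miniature. The snippet below is an illustrative, self-contained RAG-style retrieval pattern, not this team's actual stack: the `embed` function is a toy deterministic bag-of-words embedding standing in for a real embedding model, and `Doc`, `retrieve`, and `grounded_prompt` are hypothetical names.

```python
from dataclasses import dataclass

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy deterministic embedding; a real system would call an embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

@dataclass
class Doc:
    doc_id: str
    text: str

def retrieve(query: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(
        docs,
        key=lambda d: -sum(a * b for a, b in zip(q, embed(d.text))),
    )
    return scored[:k]

def grounded_prompt(query: str, docs: list[Doc], k: int = 2) -> str:
    """Assemble a prompt that grounds the agent in retrieved enterprise context."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a production platform the in-memory sort would be replaced by a vector database, and the context would flow through the semantic layer and knowledge models the bullets describe.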
Excellence in Enterprise Platform Engineering
- Architect scalable, fault-tolerant AI agent platforms within hybrid cloud environments (Azure & GCP).
- Set architecture standards that ensure low latency, high availability, resiliency, and observability.
- Collaborate with cloud and platform engineering teams to deliver secure, containerized, and API-driven infrastructure for agent workloads.
- Develop platform lifecycle patterns, including versioning, release gating, rollback strategies, and performance benchmarking.
- Facilitate cost-efficient scaling of AI workloads across millions of enterprise and customer interactions.
Innovation in Agent Quality, Safety & Evaluation
- Define and operationalize the Agentic SDLC, including safety testing, evaluation frameworks, regression gates, and release readiness criteria.
- Architect systems for continuous agent improvement utilizing automated evaluation pipelines and human feedback loops.
- Establish standards for hallucination mitigation, prompt safety, PII protection, and prevention of AI misuse.
- Lead the development of observability and AIOps patterns for agent monitoring, anomaly detection, and operational intelligence.
- Establish performance scoring frameworks for agent quality, reliability, and cost optimization.
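A minimal sketch of the regression-gate idea in the bullets above, under stated assumptions: `EvalCase`, `pass_rate`, and `gate_release` are illustrative names, and the substring check stands in for the richer automated scoring a real evaluation framework would use.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # toy automatic check; real scoring is far richer

def pass_rate(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Fraction of eval cases whose response passes the check."""
    passed = sum(case.must_contain in agent(case.prompt) for case in cases)
    return passed / len(cases)

def gate_release(agent: Callable[[str], str],
                 cases: list[EvalCase],
                 threshold: float = 0.9) -> bool:
    """Release-readiness gate: True only if the agent meets the quality bar."""
    return pass_rate(agent, cases) >= threshold
```

Wiring a gate like this into CI is what turns evaluation from a one-off exercise into the release-readiness criteria and regression gates the Agentic SDLC bullet calls for.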
Driving Strategic AI Platform Innovation
- Engage with engineering, product, and data science leaders to build intelligent agent platforms tailored to customer and enterprise use cases.
- Pursue innovations in multi-agent systems, LLM-powered workflows, and AI orchestration technologies.
- Assess emerging agent frameworks, tooling, and open standards to inform platform strategy and make informed build-vs-buy decisions.
- Contribute to platform engineering excellence by creating reusable AI infrastructure and enabling capabilities for developers.
- Offer architectural mentorship and technical guidance on agentic AI design, scalable engineering practices, and enterprise AI standards.
Skills
Azure · CI/CD · GCP · Kubernetes · LangChain · LangGraph · LLM · MCP Servers · Python · RAG · Vector DBs