AI-Native Back End Engineer
ATTIX
About This Role
Vama is a messaging platform where AI agents are first-class participants in conversations — not bolted-on bots, not separate apps, but native members of your chats. We’re also building a business communication product with AI agents built in.
You own two critical things: making AI agents feel native inside Vama conversations, and building the business/workspace features that turn Vama into the place where small teams communicate and get work done. You’ll work across both AWS (where agent infrastructure lives) and GCP (where our chat platform runs).
We’re looking for someone who lives at the intersection of LLM agent architectures and backend systems engineering. If you’ve built agent integrations, understand tool orchestration and context management, AND can write production Go — this is your role.
What You’ll Build
Agent Integration
- The bridge between our Node.js agent runtime (AWS) and Vama’s Go microservices (GCP) — cross-cloud message routing through NATS so agents send and receive messages like any other chat participant.
- Agent-in-conversation UX infrastructure — agents that respond to mentions, execute multi-step tasks, report back with results, share files, and maintain conversation context.
- Tool orchestration layer — define and expose Vama-native tools (calendar, file sharing, payments, search) that agents can invoke on behalf of users.
- Context and memory management — how agents maintain awareness across conversations, channels, and workspaces without blowing through token limits.
- API key management and model routing — support for bring-your-own-key, Vama-provided credits, and model selection per agent instance.
Business Platform
- Workspace and organization system — multi-tenant business accounts with admin controls, member management, and role-based permissions.
- Business billing and subscription infrastructure — per-seat pricing, usage-based agent compute billing, plan management.
- Admin dashboard APIs — usage analytics, member activity, agent usage, cost tracking per workspace.
- Business integrations foundation — webhooks, API access for workspace data, integration points for third-party tools.
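As a toy illustration of the role-based permissions piece above (role and action names are invented for this sketch, not Vama's actual model), a workspace permission check can start as a simple role-to-action table with deny-by-default semantics:

```go
package main

import "fmt"

// Role and Action are hypothetical names for a workspace permission model.
type Role string
type Action string

const (
	RoleAdmin  Role = "admin"
	RoleMember Role = "member"
	RoleGuest  Role = "guest"
)

const (
	ActionManageBilling Action = "manage_billing"
	ActionInviteMembers Action = "invite_members"
	ActionReadMessages  Action = "read_messages"
)

// grants maps each role to the set of actions it may perform.
var grants = map[Role]map[Action]bool{
	RoleAdmin:  {ActionManageBilling: true, ActionInviteMembers: true, ActionReadMessages: true},
	RoleMember: {ActionInviteMembers: true, ActionReadMessages: true},
	RoleGuest:  {ActionReadMessages: true},
}

// Can reports whether a role may perform an action.
// Unknown roles or actions fall through to Go's zero value, i.e. deny.
func Can(r Role, a Action) bool {
	return grants[r][a]
}

func main() {
	fmt.Println(Can(RoleAdmin, ActionManageBilling))  // true
	fmt.Println(Can(RoleMember, ActionManageBilling)) // false
	fmt.Println(Can(RoleGuest, ActionInviteMembers))  // false
}
```

A production multi-tenant system would scope this per workspace and back it with storage, but the deny-by-default table is the core invariant admin tooling builds on.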
What We’re Looking For
Required
- 4+ years of production backend experience, with meaningful Go experience (2+ years preferred, but depth of understanding matters more than years).
- LLM agent architecture knowledge — you understand tool use patterns, ReAct loops, context window management, memory systems, and agent orchestration frameworks. You’ve built with or on top of agent frameworks, not just called chat completion APIs.
- Distributed systems fluency — you’ve built services that communicate over message queues, handle eventual consistency, and manage distributed state. Bonus if you’ve worked across multi-cloud architectures.
- API design with gRPC or ConnectRPC — protobuf schemas, backward compatibility, RPC-based service interfaces.
- Event-driven architecture — experience with NATS, Kafka, or similar. You understand pub/sub, queue groups, and at-least-once delivery.
- Node.js proficiency — our agent runtime is Node.js. You need to be able to read, modify, and extend it.
- Comfortable working across AWS and GCP — agent infrastructure runs on AWS, chat services on GCP. You’ll bridge both daily.
- You ship 3–5x faster using AI coding tools (Claude Code, Cursor, etc.). Non-negotiable. We will test for this.
Preferred
- Experience with open-source agent frameworks (LangChain, CrewAI, AutoGen, or similar) — ideally you’ve contributed to or built production systems on top of them.
- Real-time communication experience — WebSocket, WebRTC, LiveKit, or similar.
- Multi-tenancy / SaaS platform experience — you’ve built workspace or organization systems with isolation, billing, and admin tooling.
- Cassandra or wide-column store experience — Vama’s chat backend runs on Cassandra on GCP.
- AWS experience (EKS, Lambda, SQS, S3) alongside GCP — multi-cloud fluency is a strong advantage.
- Experience building developer platforms or bot frameworks.
Why This Role
Two technically deep surfaces — agent integration and business platform — on a small team where your work ships to users fast. You’ll have ownership across the stack and direct input on architecture. Modern tooling, no legacy baggage, and colleagues who take their craft seriously.
Contract
90-day contract-to-hire role.