Generative AI Engineer
Capgemini
About the role
The AI Engineer will be at the heart of AEP’s product innovation, responsible for designing, implementing, and operationalizing AI-native services that transform the utility sector. This role combines applied AI/ML expertise with strong backend engineering skills, ensuring our agentic systems are not only functional but scalable, secure, and production-ready.
As a key member of our engineering team, you will work directly with the Data and AI Leader and operate within small, agile teams alongside software engineers, security experts, and product managers. You will own the full lifecycle of AI services — from data ingestion and model training to real-time deployment and monitoring — while continuously adapting to evolving requirements in a fast‑moving, high‑stakes industry.
Key Responsibilities
- AI Systems Development: Architect, fine‑tune, and deploy AI agents purpose‑built for utility use cases, including predictive operations, customer engagement, and energy optimization.
- Backend Integration: Build APIs, microservices, and orchestration frameworks that seamlessly connect AI models with enterprise systems and grid‑level data flows.
- Pipeline Ownership: Design and manage the full AI pipeline — ingestion, embeddings, retrieval, evaluation, and continuous deployment — ensuring reliability and scalability.
- AI Risk Mitigation: Mitigate risks unique to AI, including model drift, bias exploitation, adversarial attacks, and hallucinations, with sensitivity to regulated environments.
Qualifications and Experience
Education
- Bachelor’s or Master’s in Computer Science, Machine Learning, or related field
Experience
- 5+ years of applied ML/AI engineering experience, ideally with exposure to enterprise/mission‑critical systems. Track record of deploying AI services in production.
- Utilities industry experience is a plus
Technical Skills
- Proficiency in Python and either Java or Go
- Experience with AI agent platforms
- Expertise with ML/LLM frameworks such as PyTorch, TensorFlow, LangChain, or equivalent.
- Experience with vector databases, orchestration frameworks, and modern MLOps practices.
- Strong grounding in cloud‑native architectures (AWS, GCP, Azure).
Soft Skills
- Analytical, collaborative, and comfortable with ambiguity. Ability to thrive in small, high‑velocity teams, balancing experimentation with production rigor.
Location
- Nashville preferred.