Data & AI Engineering
Opella
About the role
We are looking for a hands‑on AI & Data Engineer to build and deploy enterprise‑grade AI capabilities powering Opella’s internal analytics platforms. This is an engineering‑first role, focused on building scalable data + AI systems using LLMs, semantic layers, and cloud data platforms. The role emphasizes applied AI, prompt engineering, and data modeling rather than traditional ML model development.
You will design standardized AI services that enable business teams across commercial, finance, supply chain, and digital functions to interact with data, generate insights, and make faster, smarter decisions.
Main Responsibilities
Data Engineering & Platform Integration
- Design and build scalable data pipelines using Snowflake / cloud data platforms
- Build and maintain transformation and orchestration workflows using dbt and Airflow
- Develop analytics‑ready data models (facts, dimensions, semantic layers)
- Ensure data quality, consistency, and performance across AI workflows
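The "facts, dimensions, semantic layers" bullet above can be made concrete with a minimal star-schema sketch. Table and column names here are illustrative, not Opella's actual model:

```python
import sqlite3

# A tiny fact/dimension model: one dimension table, one fact table,
# and an analytics-ready aggregation joining them.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_brand (brand_id INTEGER PRIMARY KEY, brand_name TEXT);
    CREATE TABLE fct_sales (brand_id INTEGER, net_sales REAL,
                            FOREIGN KEY (brand_id) REFERENCES dim_brand(brand_id));
    INSERT INTO dim_brand VALUES (1, 'BrandA'), (2, 'BrandB');
    INSERT INTO fct_sales VALUES (1, 100.0), (1, 50.0), (2, 25.0);
""")
rows = con.execute("""
    SELECT b.brand_name, SUM(f.net_sales) AS net_sales
    FROM fct_sales f JOIN dim_brand b USING (brand_id)
    GROUP BY b.brand_name ORDER BY net_sales DESC
""").fetchall()
print(rows)  # [('BrandA', 150.0), ('BrandB', 25.0)]
```

In production this grain and join logic would live in dbt models on Snowflake rather than in application code.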
Applied AI, LLM & Prompt Engineering
- Design and implement LLM‑powered capabilities such as:
  - Natural language SQL/query generation
  - KPI reasoning and insight generation
  - Metadata extraction and entity resolution
  - Semantic search and contextual retrieval
- Build and optimize prompt engineering frameworks, including:
  - Structured prompting for SQL, JSON, and analytical outputs
  - Few‑shot / instruction tuning strategies
  - Guardrails to ensure accuracy, consistency, and business alignment
  - Prompt evaluation and iterative improvement
- Integrate LLMs via platforms like AWS Bedrock, OpenAI, Anthropic
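As a rough sketch of the "structured prompting for SQL" work above, the snippet below assembles a few-shot natural-language-to-SQL prompt. The schema, table names, and example pairs are hypothetical:

```python
# Hypothetical few-shot examples pairing a business question with its SQL.
FEW_SHOT_EXAMPLES = [
    ("Total net sales by brand last quarter",
     "SELECT brand, SUM(net_sales) FROM fct_sales WHERE quarter = 'LAST' GROUP BY brand;"),
    ("Top 5 markets by unit volume",
     "SELECT market, SUM(units) AS units FROM fct_sales GROUP BY market ORDER BY units DESC LIMIT 5;"),
]

def build_sql_prompt(question: str, schema: str) -> str:
    """Compose a structured prompt: instruction, schema, few-shot pairs, then the question."""
    shots = "\n\n".join(f"Question: {q}\nSQL: {sql}" for q, sql in FEW_SHOT_EXAMPLES)
    return (
        "You are an analytics assistant. Answer ONLY with a single SQL query.\n"
        f"Schema:\n{schema}\n\n"
        f"{shots}\n\n"
        f"Question: {question}\nSQL:"
    )

prompt = build_sql_prompt(
    "Average price per unit by category",
    "fct_sales(brand, market, category, net_sales, units, quarter)",
)
print(prompt.splitlines()[0])
```

The resulting string would be sent to a model via Bedrock, OpenAI, or Anthropic APIs; constraining the output to "a single SQL query" is one simple guardrail of the kind the role calls for.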
RAG & Knowledge Systems
- Engineer RAG pipelines and embedding‑based retrieval systems
- Design vector search strategies (indexing, chunking, ranking, grounding)
- Combine structured (data warehouse) and unstructured (docs, PPTs, metadata) sources into unified knowledge layers
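The retrieval side of a RAG pipeline can be sketched with plain cosine similarity. Real systems would use a vector store such as FAISS or Pinecone; the 3-dimensional vectors and chunk ids below are toy stand-ins for model embeddings:

```python
import math

# Hypothetical pre-chunked sources mapped to toy embedding vectors.
CHUNKS = {
    "kpi_definitions.md#1": [0.9, 0.1, 0.0],
    "supply_chain_faq.pptx#4": [0.1, 0.8, 0.3],
    "brand_glossary.docx#2": [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Rank chunks by similarity to the query and return the top-k ids for grounding."""
    ranked = sorted(CHUNKS, key=lambda cid: cosine(query_vec, CHUNKS[cid]), reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.1])
print(top)
```

The returned chunk ids would be resolved back to their text and injected into the model's context, which is where chunking and grounding strategy choices pay off.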
AI Services & Engineering Architecture
- Develop APIs and microservices using Python, FastAPI, Docker
- Build orchestration layers for multi‑step AI workflows and agent‑based systems
- Implement inference services, caching layers, and reusable AI components
- Deploy scalable AI services using cloud‑native architectures
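One of the "caching layers" mentioned above can be as simple as memoizing inference calls keyed on the exact prompt. `call_llm` here is a hypothetical stand-in for a real client (Bedrock, OpenAI, etc.):

```python
from functools import lru_cache

CALLS = {"count": 0}

def call_llm(prompt: str) -> str:
    """Stand-in for an expensive network round-trip to an LLM."""
    CALLS["count"] += 1
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_inference(prompt: str) -> str:
    """Memoize responses so repeated identical prompts skip the model call."""
    return call_llm(prompt)

cached_inference("net sales by brand?")
cached_inference("net sales by brand?")  # served from cache
print(CALLS["count"])  # 1
```

A production service would swap `lru_cache` for a shared cache (e.g. Redis) so the saving holds across API replicas, but the keying principle is the same.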
Semantic Layer & Business Alignment
- Integrate AI systems with governed semantic layers and metadata
- Ensure outputs align with defined KPIs, business rules, and data standards
- Translate business questions into reusable AI‑powered analytics services
Productionization & Governance
- Establish standards for prompt versioning, monitoring, and evaluation
- Implement logging, performance tracking, and reliability checks
- Ensure secure, privacy‑safe AI usage aligned with enterprise policies
- Support CI/CD, testing, and robust deployment practices
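The prompt-versioning standard mentioned above could look like content-addressed storage: hash the template, and identical content always yields the same version id. The registry shape below is purely illustrative:

```python
import hashlib

REGISTRY: dict = {}

def register_prompt(name: str, template: str) -> str:
    """Store a prompt template under a hash-derived version id and return that id."""
    version = hashlib.sha256(template.encode()).hexdigest()[:12]
    REGISTRY[f"{name}@{version}"] = {"name": name, "template": template}
    return version

v1 = register_prompt("nl2sql", "Translate to SQL: {question}")
v2 = register_prompt("nl2sql", "Translate to SQL: {question}")  # same content, same id
print(v1 == v2)  # True
```

Logging the `name@version` id alongside every model call makes evaluation runs and incident investigations reproducible, which is the point of the monitoring and reliability checks listed here.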
About You
Core Experience
- 4–8 years of experience in AI/Data Engineering or Applied AI systems
- Strong programming skills in Python
- Strong SQL and data modeling experience (Snowflake preferred)
AI / LLM & Prompt Engineering
- Hands‑on experience with LLMs (GPT, Claude, etc.) and prompt engineering for structured outputs (SQL, JSON, insights)
- Few‑shot prompting and instruction design, prompt evaluation and optimization
- Experience building RAG pipelines and embeddings, vector search systems (Pinecone, Elastic, FAISS)
Engineering & Platform Skills
- Experience with FastAPI / Flask, Docker, cloud deployments
- Familiarity with orchestration tools (Airflow, etc.)
- Strong software engineering practices (CI/CD, testing, version control)
Data & Analytics Understanding
- Strong understanding of data warehouse concepts (facts, dimensions, grain)
- Semantic layers and KPI modeling
- Business‑facing analytics use cases
- Experience in CPG, retail or e‑commerce is a plus
Must Have
- Exposure to agent frameworks (LangChain, Autogen)
- Experience with prompt + metadata‑driven systems
- Familiarity with Streamlit or user‑facing data apps
Education
- Bachelor’s degree in Computer Science, Data Engineering, AI, or related field
- Advanced certifications (AWS, Snowflake, Databricks, GCP) are a plus
Why us?
At Opella, you will enjoy doing challenging, purposeful work, empowered to develop consumer brands with passion and creativity. This is your chance to grow new skills and be part of a bold, collaborative, and inclusive culture where people can thrive and be at their best every day.
Our culture:
- All In Together: We keep each other honest and have each other’s backs.
- Courageous: We break boundaries and take thoughtful risks with creativity.
- Outcome‑Obsessed: We are personally accountable, driving sustainable impact and results with integrity.
- Radically Simple: We strive to make things simple for us and simple for consumers, as it should be.
Join us on our mission. Health. In your hands.