Lead Data Engineer / Snowflake Engineer (AWS, Agentic AI)
Momento USA
Montreal · On-site · Contract · Lead · Posted 3 days ago
Role Overview
We are hiring a Lead Data Engineer / Snowflake Engineer with deep expertise in Snowflake on AWS and hands-on exposure to Agentic AI / Generative AI–driven data platforms. This role will lead the design, modernization, and scaling of cloud‑native data architectures that power analytics and AI initiatives.
Key Responsibilities
- Architect, design, and optimize Snowflake data platforms on AWS for high performance and cost efficiency
- Lead the development of end‑to‑end ELT/ETL pipelines using Snowflake, AWS services, and modern data engineering tools
- Implement advanced Snowflake features (performance optimization, warehouse strategy, Data Sharing, Streams & Tasks, Time Travel, Zero-Copy Cloning)
- Design data foundations that support Agentic AI and GenAI workloads, including AI‑ready datasets, vectorized data, and metadata‑driven pipelines
- Collaborate with AI/ML teams to enable autonomous agents, LLM‑driven analytics, and intelligent data orchestration
- Provide technical leadership, code reviews, and mentoring to data engineering teams
- Partner with business and product stakeholders to translate analytics and AI requirements into scalable data solutions
Required Skills & Experience
- 8–10 years of experience in Big Data Engineering / Analytics
- Expert‑level Snowflake experience, including large‑scale production deployments
- Strong hands-on experience with AWS (S3, EC2, Lambda, Glue, Redshift/Athena, IAM, CloudWatch, Step Functions)
- Proven experience building cloud‑native data architectures on AWS
- Solid programming skills in Python and SQL
- Experience with data modeling for analytics and AI use cases
- Hands-on or applied exposure to Agentic AI, Generative AI, or AI‑driven data platforms
- Experience leading or mentoring engineering teams in enterprise environments
Highly Desirable
- Experience integrating LLMs, autonomous agents, or AI orchestration frameworks with data platforms
- Exposure to vector databases, embeddings, or AI‑optimized data pipelines
- Experience with dbt, Airflow, Kafka, Spark, or similar tools
- Prior onsite experience in large, complex enterprise data ecosystems
Skills
Airflow · Agentic AI · AWS · AWS Athena · AWS CloudWatch · AWS EC2 · AWS Glue · AWS IAM · AWS Lambda · AWS Redshift · AWS S3 · AWS Step Functions · dbt · Generative AI · Kafka · Python · Snowflake · Spark · SQL