Senior GCP Data Engineer, Kafka Streaming and Vertex AI Pipelines (Contract)

Samay Consulting

Seattle · On-site · Contract · Senior · Posted yesterday

About the role

Samay Consulting is hiring a hands-on senior GCP data engineer to embed within the AI team at a luxury retailer based in Seattle. The core of this role is data engineering on GCP, with substantial Kafka streaming work and ML platform delivery through Vertex AI pipelines. You will work alongside the client's ML and platform engineers, building the customer event store and pipeline infrastructure that power their machine learning workloads. This is hands-on senior engineering, not architecture-only.

What you will work on

  • BigQuery SQL at scale, including partitioning and clustering strategy, slot tuning, and rewriting expensive queries for cost and performance.
  • Python data pipelines using Polars or Pandas on multi-billion-row datasets, with attention to Parquet layout, partition pruning, and large join performance.
  • Kafka consumers and Flink streaming jobs that feed a customer event store, with customer-keyed partitioning, time-ordered assembly across multiple upstream sources, and a schema that handles mixed event types (clicks, purchases, returns).
  • Vertex AI pipelines built with KFP, packaged in Docker, and deployed through CI to production. You will own pipeline components end to end (see the sketch after this list).
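
For a flavor of the Vertex AI side of the work, here is a minimal KFP v2 sketch of a component and pipeline of the kind described above. All component names, images, tables, and paths are hypothetical placeholders; the client's actual pipelines, Docker images, and CI wiring are not public.

    # Minimal KFP v2 sketch; names, images, and paths are hypothetical.
    from kfp import compiler, dsl

    @dsl.component(base_image="python:3.11")  # in practice a custom image built and pushed by CI
    def build_features(source_table: str, output_path: str) -> str:
        # Placeholder step; a real component would read from BigQuery and write Parquet.
        return output_path

    @dsl.pipeline(name="customer-event-features")  # hypothetical pipeline name
    def feature_pipeline(source_table: str):
        build_features(source_table=source_table, output_path="gs://example-bucket/features")

    if __name__ == "__main__":
        # Compile to a pipeline spec that CI would submit to Vertex AI Pipelines.
        compiler.Compiler().compile(feature_pipeline, package_path="feature_pipeline.json")

In the role itself, components like these run against production data and are promoted through CI rather than compiled and submitted by hand.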

What we are looking for

  • 10+ years of professional data engineering experience, with the bulk of recent work on GCP.
  • Deep BigQuery experience, with a track record of optimizing slow queries and reducing slot consumption on real workloads.
  • Strong Python data engineering at scale using Polars or Pandas, with demonstrated experience in Parquet partitioning and large-join performance tuning.
  • Production experience with Kafka and Flink, including state management, checkpointing, watermarks, and backpressure handling. Prior work on event stores or time-ordered customer event systems is a strong plus.
  • Hands-on KFP and Vertex AI pipeline experience, comfortable writing Dockerfiles and managing component containers in production. Prior exposure to ML platform work or AI infrastructure is a plus.
  • Senior-level ownership: able to make design calls, write the code, debug production issues, and explain tradeoffs to staff engineers and ML researchers.

Logistics

  • Onsite in Seattle 4 days per week is a hard requirement.
  • Local candidates are strongly preferred. Strong candidates willing to relocate to Seattle on their own will also be considered.
  • Contract role through Samay Consulting. Open to W2 and C2C.

Skills

BigQuery · Docker · Flink · GCP · KFP · Kafka · Pandas · Parquet · Polars · Python · SQL · Vertex AI
