
Backend Engineer, Metrics - Software Delivery & Data Management Firm

Andiamo

New York · On-site · Full-time · Senior · 1w ago

About the role

Backend Engineer — Metrics Platform


Build the metrics and telemetry backbone that powers observability and analytics. You’ll design ingestion, storage, and query paths for high-cardinality, high-throughput time-series data.
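To make the ingestion-path concept concrete, here is a minimal sketch of backpressure in Python (one of the languages the role lists). All names here are hypothetical illustrations, not the actual system: a bounded buffer refuses writes when full, signaling producers to slow down instead of accepting unbounded load.

```python
import queue

class BoundedIngest:
    """Toy ingestion buffer that applies backpressure.

    When the buffer is full, put() fails fast instead of accepting
    unbounded writes -- the simplest form of backpressure on a
    high-throughput ingestion path.
    """
    def __init__(self, capacity=1000):
        self.buf = queue.Queue(maxsize=capacity)

    def put(self, sample):
        try:
            self.buf.put_nowait(sample)
            return True
        except queue.Full:
            return False  # tell the producer to back off and retry

ingest = BoundedIngest(capacity=2)
print(ingest.put((1, 0.5)))  # True
print(ingest.put((2, 0.7)))  # True
print(ingest.put((3, 0.9)))  # False: buffer full, backpressure applied
```

A production pipeline would pair this with durable storage (e.g. a write-ahead log or Kafka topic) so rejected writes are retried rather than lost.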

What You’ll Do

  • Design distributed ingestion pipelines with backpressure and durability.
  • Implement time-series storage, retention, downsampling, and compaction.
  • Expose query/search APIs for dashboards, alerts, and analytics.
  • Tackle cardinality and cost efficiency at scale.
  • Integrate with tracing/logging to deliver unified observability.
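The retention/downsampling bullet above can be sketched in a few lines. This is a hypothetical toy (the `downsample` function and its windowing are illustrative assumptions, not the platform's implementation): raw samples are rolled up into fixed time windows so older data costs less to store and query.

```python
from collections import defaultdict

def downsample(samples, window_s=60):
    """Average raw (unix_ts, value) samples into fixed windows.

    A toy version of the downsampling step: each sample is bucketed
    by window start, then each bucket is reduced to its mean.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % window_s].append(value)
    return sorted(
        (start, sum(vals) / len(vals)) for start, vals in buckets.items()
    )

raw = [(0, 1.0), (30, 3.0), (60, 5.0), (90, 7.0)]
print(downsample(raw))  # [(0, 2.0), (60, 6.0)]
```

Real TSDBs (Prometheus with recording rules, M3 aggregation tiers, ClickHouse TTL rollups) apply the same idea with multiple resolutions and retention policies per tier.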

What We’re Looking For

  • 5+ years in backend/distributed systems.
  • Proficiency in Go/Java/Python and streaming (Kafka, Pulsar).
  • Experience with TSDB/OLAP stores (Prometheus, ClickHouse, M3, Cortex, InfluxDB).
  • Strong data modeling and performance tuning skills.
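The cardinality concern mentioned in the responsibilities is worth illustrating. The sketch below is a hypothetical guard (class and limits are invented for illustration): it caps the number of distinct label-sets a metric may create, rejecting new series once the budget is exhausted rather than letting storage and query cost explode.

```python
class CardinalityLimiter:
    """Toy guard that caps distinct label-sets per metric name."""

    def __init__(self, max_series=1000):
        self.max_series = max_series
        self.seen = {}  # metric name -> set of frozen label-sets

    def admit(self, metric, labels):
        series = self.seen.setdefault(metric, set())
        key = frozenset(labels.items())
        if key in series:
            return True  # existing series: always accepted
        if len(series) >= self.max_series:
            return False  # over budget: drop or route to an aggregate
        series.add(key)
        return True

limiter = CardinalityLimiter(max_series=2)
print(limiter.admit("http_requests", {"path": "/a"}))  # True
print(limiter.admit("http_requests", {"path": "/b"}))  # True
print(limiter.admit("http_requests", {"path": "/c"}))  # False
```

Production systems refine this with per-tenant budgets and by aggregating (rather than dropping) over-limit series.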

About Andiamo

Talent Partners for the AI Revolution. As a globally recognized staffing and consulting firm, we specialize in placing the top 2% of technology and go-to-market professionals with the world’s largest and most well-known companies.

For over 20 years, we've maintained tier-one vendor status with firms such as Palantir, Amazon, Fluidstack, Bloomberg, Relativity Space, Firefly, MasterCard, Visa, Two Sigma, and Citadel, as well as other major financial services firms, elite hedge funds, Google-backed tech start-ups, and major software firms.

Our talent solutions include Permanent Placement, Contract Staffing, Executive Search, and Dedicated Recruiting Services (RPO). Find out more at www.andiamogo.com.


Skills

ClickHouse, Cortex, Go, InfluxDB, Java, Kafka, M3, Prometheus, Pulsar, Python
