
Senior Software Engineer

Ngrok

Hybrid · Full-time · Senior · $180k – $225k/yr · posted 1mo ago

About the role

About ngrok Inc.

ngrok is an all-in-one cloud networking platform that secures, transforms, and routes traffic to services running anywhere. Instead of cobbling together nginx, NLBs, VPNs, model routers, and oodles of other tools, developers solve every networking problem with one gateway. Doesn’t matter if they’re sharing localhost or running AI workloads in production.

We're trusted by more than 9 million developers at companies like GitHub, Okta, HashiCorp, and Twilio. What started as a way to put your local app on a public URL has grown into a universal gateway for API delivery, AI inference, device fleets, and site‑to‑site connectivity. It’s the same ngrok that millions of developers have loved and leaned on every day for years, now with the power to run production traffic at scale.

A few things you should know:

  • We are obsessed with our pets, Viper sunglasses and Bufo (yes, the toad)
  • We have a designated Chief Emoji Officer – they are vital to our success!
  • We like software that’s serious and culture that’s not

About the Data Platform Team

The Data Platform team owns the data platform and analytics systems that power decision‑making across ngrok. We handle ingestion, modeling, metrics, and reporting—the systems that make sure every event is counted correctly and every number in a deck can be defended.

  • Manage about 500 TiB of data
  • Run a Dagster instance with over 1,600 assets
  • Maintain 550+ dbt models
  • Own Flink streaming pipelines that process ~22,000 messages per second on average

Our data is used across all teams at ngrok, from marketing to financial reporting. Systems must be correct, explainable, and resilient under real‑world conditions: traffic spikes, schema changes, late‑arriving events, and other challenges of a large, globally distributed system.

We treat data as a product: reliable, observable, well‑modeled, and thoughtfully designed. The Data Platform team is part of the Engineering organization and doesn’t live in a silo.


What You’ll Actually Do

  • Build the data backbone: Design and evolve pipelines and orchestration systems that move data across ngrok—from product events to financial reporting. Ingestion, transformation, modeling, reliability.
  • Make the numbers make sense: Own core business and product datasets (usage, revenue, growth, performance) and ensure they’re accurate, reconciled, and trusted.
  • Turn raw events into decision‑ready insight: Build and refine models that power dashboards, planning, forecasting, and experimentation. Clean schemas, durable definitions, metrics people actually align on.
  • Raise the bar on data reliability: Implement validation, testing, observability, and monitoring across data systems. Pipelines shouldn’t silently fail. Dashboards shouldn’t drift. Finance shouldn’t find surprises.
  • Own the platform as it scales: Improve performance, cost efficiency, and architectural design across the data stack (Airbyte, Dagster, dbt, Athena, Flink, Superset, and beyond).
  • Partner across the company: Work closely with Product, Engineering, GTM, Finance, and Leadership to build systems that make hard questions easy to answer.

You Might Be a Great Fit If…

  • You’re familiar with Python, SQL, and Scala
  • You’re also comfortable in a language such as Go, Rust, C++, or Java (bonus points for Go)
  • You write production‑quality code and treat data systems like real software—not just queries in a notebook
  • You’re interested in AWS infrastructure and Kubernetes, managed through Infrastructure as Code (Terraform or similar) — not click ops
  • You’ve built and operated large‑scale event streams, product telemetry, or high‑volume ingestion pipelines in production
  • You enjoy thinking about data models, invariants, lineage, and failure modes
  • You care about data quality and observability, and you design systems that make errors visible—not silent
  • You’re the person people ping when the numbers don’t add up—and you actually enjoy figuring out why

Extra credit if you’ve worked on

  • Usage‑based billing, metering, revenue, or financial reporting systems
  • Event‑driven or streaming data architectures
  • Customer‑facing dashboards or internal executive reporting

Tech Stack

  • Infrastructure: AWS, Kubernetes, Terraform, Helm, Buildkite
  • Data stack (self‑hosted on Kubernetes): Dagster, Superset, Airbyte, Flink
  • Warehouse: Athena with Apache Iceberg; some workloads on ClickHouse Cloud
  • Languages: Python, Scala 3 (data code); Go, TypeScript (core code)
  • Tools: dbt for SQL modeling, Postgres for persistence, Kafka for streaming, Protobuf for service boundaries, React for UI, GitHub for workflow

Location

  • Remote for candidates outside of the Bay Area
  • Hybrid for candidates within commuting distance to San Francisco (office attendance on Tuesdays and Wednesdays)

Sponsorship

All candidates must be US‑based and legally authorized to work in the United States. ngrok cannot provide visa sponsorship for this position.


Compensation

Senior Software Engineer

  • Tier 1 (SF, LA, Seattle, NYC): $180,000 – $225,000
  • Tier 2 (rest of US): $165,600 – $207,000

Software Engineer III

  • Tier 1 (SF, LA, Seattle, NYC): $160,000 – $200,000
  • Tier 2 (rest of US): $147,200 – $184,000

Compensation is evaluated based on qualifications, impact, internal equity, market data, and location. Includes salary and equity. #LI-Remote


Full‑Time Employee Benefits

  • Health: Full premiums covered for base healthcare, dental, and vision; half covered for dependents; mental‑health support.
  • Retirement: 401(k) with 100% match up to 3% of salary and 50% match up to another 2%.
  • Time off: Open, flexible vacation policy.
  • Parental leave: Up to 16 weeks for birth, up to 8 weeks for other new parents.
  • Professional development: Annual budget for books, courses, conferences; home‑office/desk stipend.
  • Remote work support: Co‑working space stipend for non‑SF locations.
  • On‑site meals: Lunch provided at least twice per week for employees at the San Francisco office.
  • Company offsites: Twice a year, part strategy, part bonding.
  • Feedback & compensation: Bi‑annual reviews for feedback and compensation adjustments.

Requirements

  • Familiar with Python, SQL, and Scala
  • Comfortable in a language such as Go, Rust, C++, or Java
  • Comfortable writing production-quality code and treating data systems like real software
  • Interested in AWS infrastructure and Kubernetes, managed through Infrastructure as Code
  • Built and operated large-scale event streams, product telemetry, or high-volume ingestion pipelines in production
  • Enjoy thinking about data models, invariants, lineage, and failure modes
  • Care about data quality and observability, and design systems that make errors visible

Responsibilities

  • Design and evolve the pipelines and orchestration systems that move data across ngrok.
  • Own core business and product datasets and ensure they’re accurate, reconciled, and trusted.
  • Build and refine the models that power dashboards, planning, forecasting, and experimentation.
  • Implement validation, testing, observability, and monitoring across our data systems.
  • Improve performance, cost efficiency, and architectural design across our data stack.
  • Work closely with Product, Engineering, GTM, Finance, and Leadership.

Benefits

Health insurance, dental insurance, vision insurance, mental health support, 401(k) matching, flexible vacation policy, parental leave, professional development budget, home office stipend, co-working space stipend, company offsites, bi-annual reviews

Skills

Airbyte, AWS, Athena, Buildkite, ClickHouse Cloud, dbt, Docker, EC2, Flink, Go, Helm, Iceberg, Java, JavaScript, Kafka, Kubernetes, Postgres, Protobuf, Python, React, Scala, Superset, Terraform, TypeScript
