
Engineer

Thinking Machines

On-site · Full-time · $350k – $475k/yr

About the Role

We’re looking for an engineer to work on data infrastructure. You'll join a small, high‑impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.

Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, develop new datasets, improve infrastructure efficiency, and enable key insights across our data assets.

If you're excited by distributed systems, large‑scale data mining, open‑source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we'd love to hear from you.

Note: This is an “evergreen role” that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns with your experience and skills. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply if you gain more experience, but please avoid applying more than once every 6 months.

What You’ll Do

  • Design, build, and operate scalable, fault‑tolerant infrastructure for LLM research: distributed compute, data orchestration, and storage across modalities.
  • Develop high‑throughput systems for data ingestion, processing, and transformation — including training data catalogs, deduplication, quality checks, and search.
  • Build systems for traceability, reproducibility, and robust quality control at every stage of the data lifecycle.
  • Implement and maintain monitoring and alerting to support platform reliability and performance.
  • Collaborate with research teams to unlock new features, improve data quality, and accelerate training cycles.

Skills and Qualifications

Minimum qualifications

  • Bachelor’s degree or equivalent experience in computer science, engineering, or similar.
  • Proficiency in at least one backend language (we use Python or Rust).
  • Fluency in distributed compute frameworks such as Apache Spark or Ray.
  • Deep familiarity with cloud infrastructure, data lake architectures, and batch and streaming pipelines.
  • Comfort operating across the stack and owning projects end‑to‑end.
  • Ability to thrive in a highly collaborative environment with many cross‑functional partners and subject‑matter experts.
  • A bias for action: you take initiative across stacks and teams wherever you spot an opportunity to ship.

Preferred qualifications

  • Hands‑on experience with Kafka, dbt, Terraform, and Airflow.
  • Experience building a web crawler.
  • Extensive experience understanding and scaling deduplication, data mining, and search.
  • Strong knowledge of file formats and storage systems (e.g., Parquet, Delta Lake) and how they impact performance and scalability.
  • Proactive about documentation, testing, and empowering teammates with good tooling.

Logistics

  • Location: San Francisco, California
  • Compensation: Expected annual salary range $350,000 – $475,000 USD (depending on background, skills, and experience)
  • Visa sponsorship: We sponsor visas and are committed to working through the visa process together for the right fit.
  • Benefits: Generous health, dental, and vision benefits; unlimited PTO; paid parental leave; relocation support as needed.

Thinking Machines offers equal employment opportunities and does not discriminate on the basis of any protected group status under any applicable law.
