Senior Data Engineer

BHFT

Remote (Global) · Senior · Posted today

Key Responsibilities

  • Ingestion & Pipelines: Architect batch/stream pipelines (Airflow, Kafka, dbt) for diverse structured and unstructured market data. Provide reusable SDKs in Python and Go for internal data producers. (See the pipeline sketch after this list.)
  • Storage & Modeling: Implement and tune S3-based column-oriented and time-series data storage for petabyte-scale analytics; own partitioning, compression, TTL, versioning, and cost optimisation. (See the partitioning sketch after this list.)
  • Tooling & Libraries: Develop internal libraries for schema management, data contracts, validation, and lineage; contribute to shared libraries and services for internal data consumers across research, backtesting, and real-time trading. (See the data-contract sketch after this list.)
  • Reliability & Observability: Embed monitoring, alerting, SLAs/SLOs, and CI/CD; champion automated testing, data-quality dashboards, and incident runbooks.
  • Collaboration: Partner with Data Science, Quant Research, Backend, and DevOps to translate requirements into platform capabilities and evangelise best practices.
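
For context on the pipeline bullet, here is a minimal sketch of what a daily batch-ingestion DAG could look like, assuming a recent Airflow 2.x. Every name in it (market_data_ingestion, fetch_trades, the toy records) is a hypothetical illustration, not a BHFT internal:

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def market_data_ingestion():
        """Hypothetical daily batch pipeline: fetch -> validate -> load."""

        @task
        def fetch_trades(ds=None):
            # Pull one day of raw trades from a made-up venue API;
            # a real task would page through the exchange's endpoint.
            return [{"ts": f"{ds}T09:30:00", "symbol": "ABC", "price": 100.0}]

        @task
        def validate(rows: list) -> list:
            # Enforce the most basic contract: prices must be positive.
            return [r for r in rows if r["price"] > 0]

        @task
        def load(rows: list):
            # Placeholder sink; a real task would write Parquet to object storage.
            print(f"would load {len(rows)} rows")

        load(validate(fetch_trades()))


    market_data_ingestion()

A streaming leg would typically sit beside this as a Kafka consumer, with dbt handling downstream transformations; the DAG above only illustrates the batch path.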
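
The partitioning and compression ownership in the storage bullet usually reduces to decisions like the ones in this pyarrow sketch. The dataset root, columns, and partition keys are invented; pointing the same call at an S3 bucket works the same way:

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Toy tick table; in practice this arrives from the ingestion pipeline.
    table = pa.table({
        "date": ["2024-01-02", "2024-01-02", "2024-01-03"],
        "symbol": ["ABC", "XYZ", "ABC"],
        "price": [100.0, 55.5, 101.2],
    })

    # Hive-style layout: ticks_dataset/date=2024-01-02/symbol=ABC/...parquet
    # Readers can then prune whole directories when filtering on date/symbol.
    pq.write_to_dataset(
        table,
        root_path="ticks_dataset",          # hypothetical root; an s3:// URI also works
        partition_cols=["date", "symbol"],  # prune scans by date and symbol
        compression="zstd",                 # per the compression ownership above
    )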
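
For the data-contract bullet, one common pattern is a versioned schema enforced at the producer boundary. The sketch below uses the jsonschema library with a made-up trades.v1 contract; field names and constraints are purely illustrative:

    import jsonschema

    # Hypothetical v1 contract for a trades feed; producers and consumers
    # would both pin this (e.g. via a shared internal library).
    TRADES_V1 = {
        "type": "object",
        "required": ["ts", "symbol", "price", "size"],
        "properties": {
            "ts": {"type": "string"},
            "symbol": {"type": "string", "minLength": 1},
            "price": {"type": "number", "exclusiveMinimum": 0},
            "size": {"type": "integer", "minimum": 1},
        },
        "additionalProperties": False,
    }

    def validate_trade(record: dict) -> None:
        # Raises jsonschema.ValidationError on any contract violation.
        jsonschema.validate(instance=record, schema=TRADES_V1)

    validate_trade({"ts": "2024-01-02T09:30:00Z", "symbol": "ABC",
                    "price": 100.0, "size": 10})  # passes silently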

Qualifications

Required Skills & Experience

  • 7 years building production-grade data systems.
  • Familiarity with market data formats (e.g. MDP, ITCH, FIX, proprietary exchange APIs) and market data providers.
  • Expert-level Python (Go and C nice to have).
  • Hands-on with modern orchestration (Airflow) and event streams (Kafka).
  • Strong SQL proficiency: aggregations, joins, subqueries, window functions (first/last, candle, histogram), indexes, query planning and optimization. (See the candle sketch after this list.)
  • Designing high-throughput APIs (REST/gRPC) and data access libraries.
  • Strong Linux fundamentals, containers (Docker), and cloud object storage (AWS S3 / GCS).
  • Proven track record of mentoring, code reviews, and driving engineering excellence.
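
To make the window-function expectation concrete, here is a sketch that builds one-minute OHLC candles with first_value/last_value over a per-bucket window. DuckDB is used only as a convenient in-process SQL engine, and the table and ticks are invented:

    import duckdb

    con = duckdb.connect()  # in-memory database
    con.execute("CREATE TABLE ticks (ts TIMESTAMP, price DOUBLE)")
    con.execute("""
        INSERT INTO ticks VALUES
          ('2024-01-02 09:30:01', 100.0),
          ('2024-01-02 09:30:30', 101.5),
          ('2024-01-02 09:30:59', 100.8),
          ('2024-01-02 09:31:10', 101.0)
    """)

    # One row per minute: open/close from first_value/last_value, high/low
    # from max/min, all over the same whole-bucket frame.
    candles = con.execute("""
        SELECT DISTINCT
            date_trunc('minute', ts)  AS bucket,
            first_value(price) OVER w AS "open",
            max(price) OVER w         AS high,
            min(price) OVER w         AS low,
            last_value(price) OVER w  AS "close"
        FROM ticks
        WINDOW w AS (
            PARTITION BY date_trunc('minute', ts)
            ORDER BY ts
            ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
        )
        ORDER BY bucket
    """).fetchall()
    print(candles)  # one (bucket, open, high, low, close) row per minute

The histogram side of that bullet typically drops out of the same shape of query, with prices grouped into buckets instead of minutes.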

Additional Information

What we offer:

  • Working in a modern international technology company without bureaucracy, legacy systems, or technical debt.
  • Excellent opportunities for professional growth and self-realization.
  • We work remotely from anywhere in the world with a flexible schedule.
  • We offer compensation for health insurance, sports activities, and professional training.

Remote Work

Yes

Employment Type

Full-time

Benefits

Health insurance, sports activities, professional training

Skills

Airflow, AWS S3, C, dbt, Docker, GCS, Go, gRPC, Kafka, Linux, Python, REST, S3, SQL
