Lead Software Engineer
Lorven Technologies Inc.
Remote · Canada · Contract
What you’ll do
- Lead the migration of large-scale logs and distributed traces from the existing analytics database to ClickHouse
- Design scalable ClickHouse schemas optimized for high-ingestion telemetry workloads
- Architect ingestion pipelines for logs and trace data
- Translate existing queries and data models into performant ClickHouse equivalents
- Develop and execute backfill and cutover strategies to ensure a seamless migration
- Optimize query performance, partitioning, storage layout, and retention strategies
- Benchmark and validate performance under production-scale workloads
- Establish monitoring, alerting, and operational best practices for ClickHouse
- Design and support alerting workflows powered by ClickHouse queries
- Document architecture decisions, tuning approaches, and operational runbooks
- Partner with platform and SRE teams to ensure production readiness
Responsibilities
- Design and tune distributed ClickHouse clusters (sharding, replication, partitioning)
- Configure and optimize MergeTree engines, TTL policies, indexing, and compression
- Improve ingestion throughput and query latency at scale
- Deploy and operate ClickHouse workloads in Kubernetes environments
- Implement backup and restore strategies
- Identify and resolve performance bottlenecks
- Ensure reliability, scalability, and maintainability of the platform
- Support security and production-readiness reviews
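For candidates unfamiliar with the stack, the cluster-tuning work above (MergeTree engines, TTL policies, partitioning, indexing, and compression) can be sketched in a single ClickHouse DDL fragment. This is an illustrative sketch only; the database, table, and column names are assumptions, not part of the role:

```sql
-- Hypothetical log table for a high-ingestion telemetry workload (all names illustrative).
CREATE TABLE logs.app_logs
(
    timestamp  DateTime64(3) CODEC(Delta, ZSTD),   -- delta encoding compresses near-monotonic timestamps well
    service    LowCardinality(String),             -- dictionary-encodes a low-cardinality dimension
    level      LowCardinality(String),
    trace_id   String,
    message    String CODEC(ZSTD(3)),              -- heavier compression for bulky payloads
    INDEX idx_trace trace_id TYPE bloom_filter GRANULARITY 4  -- skip index for point lookups by trace ID
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(timestamp)                 -- daily partitions keep merges and retention drops cheap
ORDER BY (service, level, timestamp)               -- sort key drives the sparse primary index
TTL timestamp + INTERVAL 30 DAY DELETE;            -- retention policy: expire rows after 30 days
```

The sort key here puts the most selective query dimensions first, which is the usual starting point for tuning query latency on telemetry tables.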
Basic Qualifications
- 12+ years of experience in backend, infrastructure, or data platform engineering
- Hands-on production experience with ClickHouse
- Strong experience with Terraform and Helm charts
- Experience deploying and operating stateful systems on Kubernetes
- Strong understanding of distributed systems and high-throughput ingestion architectures
- Experience designing schemas for large-scale analytical or telemetry workloads
- Strong SQL skills and performance tuning expertise
- Experience migrating data between large-scale analytical systems
- Familiarity with at least one major cloud provider: Azure, GCP, or AWS
Preferred Qualifications
- Experience with Azure Data Explorer (Kusto) or Prometheus
- Experience with OpenTelemetry logs and traces
- Experience with Kafka or Azure Event Hub ingestion pipelines
- Experience handling high-cardinality telemetry datasets
- Experience in observability or monitoring platforms
- Experience working with billion-row-scale datasets
- Experience with cost optimization of analytical workloads
- Familiarity with ClickHouse Enterprise features
Skills
AWS · Azure · ClickHouse · GCP · Helm charts · Kubernetes · SQL · Terraform