Senior Data Platform Engineer
Mistplay
About the role
Mistplay is the #1 loyalty app for mobile gamers. Our community of millions of engaged mobile gamers comes to Mistplay to discover new games to play and earn rewards. Gamers are rewarded for their time and money spent within the games and can redeem those rewards for gift cards. Mistplay is on a mission to be the best way to play mobile games for everyone, everywhere! Download Mistplay on the Google Play Store and follow us on Instagram, Twitter and Facebook.
📍 Please Note: In Canada 🇨🇦, Mistplay follows a 2 days/week in-office hybrid model in Toronto (400 University Ave) & Montreal (1001 Blvd. Robert-Bourassa)
Role Overview
Reporting to the Director of Data Platform, the Senior Data Platform Engineer is responsible for building and operating the core systems that enable reliable, scalable, and high-velocity data access and analytics across Mistplay.
This role is not about analysis; it is about engineering data systems at scale. You will own significant platform components, contribute to technical direction, and apply best practices that enable data to move from raw ingestion to trusted, high-quality insights that drive real-time business impact.
You will operate as a strong individual contributor across teams, partnering with Data Science, ML Platform, and Backend to reduce data latency, increase analytical throughput, and improve data trust across the full data lifecycle.
What You'll Do
Be a key contributor to designing, building, and operating:
- Ingestion & Pipeline Infrastructure - build and maintain scalable, reliable ingestion systems for batch and streaming data sources; implement schema evolution, data contracts, and end-to-end lineage; contribute to compute and cost optimization across diverse workloads.
- Data Warehouse & Lakehouse Architecture - implement and evolve the core analytical data platform (warehouse, lakehouse, or hybrid); apply storage layer strategies, partitioning, and access patterns; contribute to data modeling standards, performance tuning, and cost efficiency.
- Transformation & Orchestration Layer - build and maintain scalable, maintainable transformation pipelines (e.g., dbt, Spark); implement orchestration and dependency management; enforce data quality contracts and testing frameworks across the transformation layer.
- Data Serving & Access Layer - implement low-latency data access systems for analytical and operational consumers; apply caching, materialization, and API strategies; contribute to SLAs on freshness, consistency, and query performance.
- Observability & Data Quality - implement data quality monitoring, anomaly detection, and freshness checks; contribute to data SLO definitions and operational practices; participate in incident response and postmortems for data reliability.
- Data Catalog & Discoverability - contribute to metadata management systems; drive data discoverability, ownership, and documentation standards within your domain; support self-serve access to trusted, well-understood data assets.
- Platform Tooling & Evolution - evaluate and integrate data platform components (e.g. Spark, dbt, Airflow, Kafka, data catalogs); contribute to migrations and platform improvements with minimal disruption to downstream consumers.
What You'll Bring
- Data Platform Experience - 5+ years building and operating production data platforms; proven ownership of components within large-scale systems supporting real-time or near real-time data access and analytical workloads.
- Software Engineering - strong proficiency in Python, Scala, or Go; track record of building and evolving distributed data systems with high reliability, maintainability, and strong engineering standards.
- Data Warehousing & Lakehouse - solid expertise in modern data warehouse and lakehouse architectures (e.g., Snowflake, BigQuery, Databricks, Delta Lake, Iceberg); experience building and optimizing analytical systems at scale.
- Streaming & Batch Pipelines - strong experience designing and operating data pipelines; solid understanding of streaming systems (e.g. Kafka, Flink) and batch frameworks (e.g. Spark, dbt) with awareness of trade-offs across latency, throughput, and cost.
- Data Modeling & Transformation - ability to apply and contribute to data modeling standards across diverse consumer needs; experience working within transformation frameworks and testing practices across engineering and analytics teams.
- Observability & Operations - solid operational rigor across data systems (metrics, logs, data quality alerts); experience contributing to SLO definitions, cost optimization, and incident response for data reliability.
- Technical Growth (Senior Level) - actively participates in design reviews and architectural discussions; mentors teammates; demonstrates ownership of complex platform components; shows clear trajectory toward setting broader technical direction.
- Collaboration & Influence - works effectively across Data Science, ML Platform, Analytics, DevOps, and Backend; communicates technical trade-offs clearly; translates requirements into well-scoped, executable platform work.
Why Mistplay?
We strive to make our work environment as inviting and fun as possible! Working at Mistplay comes with a whole array of perks, offered both virtually and in person: team lunches, game nights, company-wide events, and so much more. Our culture is deeply rooted in growth and upheld by a team of smart, dynamic, and enthusiastic people. We use data to constantly learn, improve, and adapt. We foster an environment where everyone is encouraged to share their ideas, push boundaries, take calculated risks, and see their visions come to life.