
Senior Data Engineer

X Consulting

Remote · Nigeria · Full-time · Mid Level · ₦2,500k – ₦4,000k/mo · 2w ago

About the role

Role Summary

The Data Engineer for Experimentation is responsible for building and maintaining high-quality data pipelines and datasets that support experimentation (such as A/B testing) and analytics, working primarily within cloud environments like Amazon Web Services. The role focuses on transforming raw data into reliable, analysis-ready formats, enabling teams to make data-driven decisions, and ensuring data systems are scalable, well-documented, and optimized for performance, rather than on managing infrastructure.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and workflows in Amazon Web Services to support experimentation datasets
  • Transform, clean, and model raw data into structured, analysis‑ready datasets
  • Standardize data using best practices in data modelling and governance
  • Collaborate with data scientists, analysts, and product teams to translate business needs into data solutions
  • Develop and optimize SQL‑based ETL processes for performance and reliability
  • Ensure data quality through testing, validation, and monitoring frameworks
  • Document data flows, schemas, and pipeline logic for transparency and maintainability
  • Design and support experiment tracking systems and metrics (e.g., A/B testing data)
  • Improve data standards, naming conventions, and reusable frameworks across teams
  • Mentor and guide colleagues on data modelling and pipeline best practices

Requirements

  • 3+ years of experience in Data Engineering or Analytics Engineering roles
  • Advanced SQL skills for complex queries, transformations, and data modelling
  • Strong knowledge of data modelling techniques (e.g., star schema, 3NF, entity‑relationship modelling, medallion architecture)
  • Proven experience building and maintaining data pipelines in Amazon Web Services (e.g., Redshift, S3, Glue, Lambda, Step Functions)
  • Experience with data tools such as DBT, Airflow, or similar orchestration frameworks
  • Ability to translate business problems into scalable data solutions
  • Strong analytical and problem‑solving skills
  • Good communication skills for working with both technical and non‑technical stakeholders
  • Familiarity with experimentation frameworks (e.g., A/B testing, event tracking) is an added advantage

Additional Information

  • Location: Latvia (remote work possible)
  • The hiring company is open to relocating the successful candidate to Latvia.
  • The company is willing to sponsor a visa if there is a match and you are genuinely interested in the role.

Note: Only qualified candidates will be contacted.

Job Type: Full‑time

Pay: ₦2,500,000.00 – ₦4,000,000.00 per month

Application Questions

  • Do you have experience working within a digital product experimentation environment?
  • This role is based in Latvia. Do you have a Schengen Travel Visa? If not, are you willing to relocate to Latvia?

Experience:

  • Data Engineering: 3 years (Required)

Work Location: Remote

Skills

AWS Glue · AWS Lambda · AWS Redshift · AWS S3 · AWS Step Functions · Airflow · DBT · ETL · Entity-relationship modelling · Experimentation frameworks · Medallion architecture · SQL · Star schema · 3NF
