Data Engineer
comparis Gruppe
Remote (Global) · Full-time · Mid-level
About us
For almost 30 years, comparis.ch has been Switzerland’s leading comparison platform. We have been comparing prices and services from health insurers, insurance providers, banks, and telecom companies, among others, and we operate Switzerland’s largest online marketplaces for real estate and cars. Through comprehensive comparisons, we create transparency and help our users make the right decisions for their needs. With more than 80 million visitors per year, we are one of the most widely used websites in Switzerland. Nine out of ten people know us as Switzerland’s independent comparison platform.
Responsibilities
- Design, implement, and continuously optimize platform services in our modern, industry-standard, GCP-based data environment
- Interact with our software architects/engineers, data scientists, and business analysts in an agile environment to define data needs and provide solutions
- Develop a high‑quality code base for our data pipelines
- Develop, provision, and monitor our data products
- Maintain our data pipeline orchestration, data lake, and data warehouse (a sketch of this kind of orchestration follows this list)
- Room to pursue your own data and engineering initiatives
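To give a concrete flavour of the orchestration work above, here is a minimal, hypothetical sketch of an Airflow DAG that rebuilds one BigQuery table daily. The DAG id, datasets, table, SQL, and region are all invented for illustration and do not reflect comparis's actual pipelines; it assumes Airflow 2.4+ with the Google provider package installed.

```python
# Minimal, hypothetical Airflow DAG: rebuilds one derived BigQuery table daily.
# All names (DAG id, datasets, tables, region) are placeholders, not real ones.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_prices_refresh",      # placeholder DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    # Rebuild a curated warehouse table from raw landing data.
    refresh_daily_prices = BigQueryInsertJobOperator(
        task_id="refresh_daily_prices",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_prices AS
                    SELECT product_id, MIN(price) AS best_price
                    FROM raw.price_events
                    WHERE DATE(ingested_at) = CURRENT_DATE()
                    GROUP BY product_id
                """,
                "useLegacySql": False,
            }
        },
        location="europe-west6",        # placeholder region
    )
```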
Requirements
- Knowledgeable and experienced in SQL programming and dbt
- At least 3 years of experience in a similar data science/engineering environment with a strong track record of data projects on Google Cloud Platform (GCP)
- Experience with GCP-native services like BigQuery, Cloud Run, Dataproc, and Dataflow
- Experience in developing high-quality software, including unit and integration tests, in one or more languages (such as Python or Java), while leveraging CI/CD tools and Git (see the test sketch after this list)
- Experience managing data pipeline workflows with Airflow
- Experience working with containerization (Docker) and infrastructure‑as‑code frameworks (Terraform)
- Open to new technologies and challenges
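As an illustration of the testing expectation above, here is a small, hypothetical pytest example. The function under test and its rounding rule are invented for this sketch and are not part of any comparis codebase.

```python
# Hypothetical unit-test sketch: the function and its rounding rule are
# invented for illustration only.
import pytest


def normalize_premium(amount_chf: float) -> float:
    """Round a premium to the nearest 5 centimes, rejecting negative input."""
    if amount_chf < 0:
        raise ValueError("premium cannot be negative")
    return round(amount_chf * 20) / 20


def test_rounds_to_five_centime_increments():
    assert normalize_premium(10.02) == 10.00
    assert normalize_premium(10.03) == 10.05


def test_rejects_negative_amounts():
    with pytest.raises(ValueError):
        normalize_premium(-1.0)
```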
Nice to Have
- Experience with distributed computing technologies like Apache Spark and knowledge of ML concepts/frameworks (illustrated in the sketch below)
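For the Spark item above, a minimal, hypothetical PySpark sketch of a distributed aggregation might look like the following; the bucket paths and column names are placeholders, not real pipeline assets.

```python
# Hypothetical PySpark sketch: a distributed daily aggregation over raw events.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-price-aggregation").getOrCreate()

# Read raw events from the data lake (placeholder path).
events = spark.read.parquet("gs://example-bucket/raw/price_events/")

# Compute the best price per product and day across the cluster.
best_prices = (
    events.withColumn("event_date", F.to_date("ingested_at"))
    .groupBy("event_date", "product_id")
    .agg(F.min("price").alias("best_price"))
)

# Write the curated result back to the lake (placeholder path).
best_prices.write.mode("overwrite").parquet("gs://example-bucket/curated/daily_prices/")
```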
Benefits
- Chance to join – and shape – a growing Data Team
- Highly motivated team of diverse experts
- Build AI‑driven data products to offer users a highly relevant customer experience
- Provide everyone in the organization with the necessary tools and knowledge to make better, more informed decisions
- Work with agile methods in interdisciplinary teams that highly value autonomy
- Position can be filled remotely
Skills
Airflow, BigQuery, Cloud Run, Dataflow, Dataproc, dbt, Docker, GCP, Git, Java, ML, Python, Spark, SQL, Terraform