Data Engineer - PySpark
SuisseCo GmbH
Zürich · Hybrid · Senior
About
- SuisseCo specializes in the international recruitment and placement of highly qualified IT specialists in Switzerland. We support our clients in delivering complex IT transformation initiatives and provide fast, flexible, and high-quality staffing solutions.
Details
- Start Date: 1 April 2026
- Duration: 1-year contract (extension possible)
- Location: Zürich
- Work model: Hybrid
- Workload: 100%
Role Overview
A major transformation program requires the support of an experienced Senior Data Engineer. You will play a key role in designing and implementing production-ready data pipelines on a modern cloud-based data platform. Working closely with business analysts, data engineers, and solution architects, you will ensure that both functional and non-functional requirements are met to a high engineering standard.
Key Responsibilities
- Design and implement production-grade data pipelines using PySpark on Databricks
- Translate business requirements and specifications into scalable technical solutions
- Collaborate with data engineers and solution architects across the program
- Optimize Spark workloads and ensure performance, reliability, and maintainability
- Apply strong software engineering principles, design patterns, and best practices
Essential Skills & Experience (Top 3)
- PySpark (strong Python engineering skills required)
- Databricks & Delta Lake
- Azure fundamentals
Required Qualifications
- Minimum 5 years of experience as a software or data engineer working on complex systems
- 3+ years of hands‑on experience with Apache Spark (PySpark)
- Strong Python software engineering background
- Experience developing Spark applications using IDEs such as VS Code or PyCharm
- Proven experience working with Delta Lake, including performance optimization
- Solid understanding of relational data models and SQL
- Experience with high-level programming languages such as Python, Java, or C#
- Strong analytical skills with the ability to break down complex problems into manageable tasks
Skills
Azure · Databricks · Delta Lake · Java · PySpark · Python · SQL