Information Technology - Data Analyst 100%
avec competénce GmbH
Baden · On-site 4w ago
About the role
We are seeking a highly skilled Data Analyst with a strong background in functional programming and distributed data processing. This role combines approximately 50% administrative and coordination tasks with 50% hands-on technical work, supporting the smooth operation, documentation, and execution of data-driven initiatives in a collaborative team environment.
Responsibilities
Administrative & Coordination Responsibilities (50%)
- Lead the team
- Support the organization and coordination of data-related activities across teams.
- Maintain and update documentation for data pipelines, workflows, and system architectures.
- Plan, track, and report on data engineering tasks and deliverables.
- Ensure alignment with internal standards, processes, and data governance guidelines.
- Facilitate information flow between technical and non‑technical stakeholders.
- Contribute to continuous improvement of operational processes and tooling.
Technical Responsibilities (50%)
- Develop and maintain high‑performance, reliable, and distributed applications using functional programming principles.
- Work with Spark on Databricks, ensuring resilience and elasticity in large-scale deployments.
- Build and optimize data‑intensive workflows using Apache Spark or comparable frameworks.
- Apply solid knowledge of runtime environments, execution contexts, and pure functional design to deliver predictable, testable software.
- Collaborate with teams to design architectural solutions based on sound understanding of distributed and parallel data transformation.
- Write efficient, maintainable code in Python.
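As an illustrative sketch (not part of the posting itself), the kind of pure, testable, map-reduce-style transformation this role describes might look like the following in plain Python; the function names and data are hypothetical:

```python
from functools import reduce

# Pure transformation: output depends only on the input, no side effects.
def tokenize(line: str) -> list[str]:
    return line.lower().split()

def count_words(lines: list[str]) -> dict[str, int]:
    # Map step: tokenize every line into words.
    words = (w for line in lines for w in tokenize(line))
    # Reduce step: fold the word stream into a frequency table.
    def merge(acc: dict[str, int], word: str) -> dict[str, int]:
        return {**acc, word: acc.get(word, 0) + 1}
    return reduce(merge, words, {})

counts = count_words(["Spark makes ETL simple", "spark scales ETL"])
# counts["spark"] == 2, counts["etl"] == 2
```

Because both steps are pure, the same logic maps directly onto distributed frameworks such as Apache Spark, where partitions can be re-executed on failure without changing the result.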
Must‑Have Qualifications
- Strong hands-on experience in functional programming using one or more paradigms:
  - ETL pipelines
  - Actor model systems (Akka, Apache Pekko)
  - Category theory–based systems
  - Map-reduce frameworks
- Real-world experience with Apache Spark (most important) or similar libraries and technologies (Apache Flink, Cats / Cats Effect, Hadoop, Kafka Streams).
- Solid understanding of runtime systems, execution contexts, and pure functions.
- Deep understanding of compiled, JIT-compiled, and interpreted execution models.
- Proficiency in Python and in one of Scala/Java, C/C++, or Rust.
- Experience working within project environments, including planning, coordination, or tracking of technical tasks.
- Experience collaborating in multi‑person teams, including task alignment, information sharing, and coordination across roles.
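For context on the "pure functions" requirement above: a pure function's output depends only on its arguments, which is what makes distributed execution predictable and retries safe. A minimal contrast, with hypothetical example functions:

```python
import random

# Pure: same input always yields the same output, no side effects.
# Trivially testable and safe to re-execute on task retry in a cluster.
def scale(values: list[float], factor: float) -> list[float]:
    return [v * factor for v in values]

# Impure: result depends on hidden state (the RNG), so it cannot be
# asserted against a fixed expected value and may differ on retry.
def jitter(values: list[float]) -> list[float]:
    return [v + random.random() for v in values]

result = scale([1.0, 2.0], 3.0)  # always [3.0, 6.0]
```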
Skills
Apache Flink, Apache Pekko, Apache Spark, Akka, Cats, Cats Effect, C++, Databricks, ETL, Hadoop, Java, Kafka Streams, MapReduce, Python, Rust, Scala