
Senior Software Engineer

Ververica GmbH

Flexible · Full-time · Senior

About the role

About the Team:

The Stream Engines & Ecosystem (SEE) team at Ververica owns VERA, an enterprise-grade stream processing engine built on Apache Flink, together with the surrounding ecosystem that makes it production-ready for demanding workloads. Our charter covers the engine runtime itself as well as the components customers rely on to integrate VERA into real-world data platforms: connectors, catalogs, change data capture (CDC), and AI model integration.

Role Overview:

We are looking for a Senior Software Engineer to contribute to the design, implementation, and long-term evolution of the VERA engine and its ecosystem. The primary focus is on engine kernel development, complemented by meaningful work across the broader ecosystem. You will take end-to-end ownership of non-trivial technical problems — from design proposals through implementation, testing, release, and production hardening.

Responsibilities:

  • Design and implement features and improvements in the VERA engine kernel, including areas such as runtime execution, state management, checkpointing, scheduling, fault tolerance, and SQL/Table API.
  • Contribute to ecosystem components including connectors, catalogs, CDC pipelines, and AI model integration.
  • Diagnose and resolve correctness, performance, and stability issues in distributed production environments.
  • Produce and review technical design documents; participate in architectural discussions for both kernel and ecosystem initiatives.
  • Uphold engineering quality through code review, thorough testing, and rigorous performance and regression analysis.
  • Collaborate with adjacent teams across platform, SRE, and product functions.

Requirements

  • Strong proficiency in Java, with a solid understanding of the JVM, concurrency, the Java memory model, and performance tuning.
  • Working knowledge of distributed systems fundamentals: consensus, replication, consistency models, fault tolerance, and failure recovery.
  • Demonstrated ability to reason about and debug complex distributed data systems under production conditions.
  • Experience designing and implementing non-trivial systems-level software, with a track record of shipping and maintaining production code.
  • Familiarity with stream processing concepts such as event time, watermarks, windowing, exactly-once semantics, and state backends.
  • Ability to produce clear design documents and collaborate effectively in an asynchronous, remote environment.
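To give a flavor of the stream processing concepts listed above (event time, watermarks, windowing), here is a toy, self-contained sketch in plain Java. It is purely illustrative and is not Flink or VERA API code: the class name, window size, and lateness bound are all invented for the example. It assigns events to tumbling event-time windows and drops events that arrive behind a bounded-out-of-orderness watermark.

```java
import java.util.Map;
import java.util.TreeMap;

// Toy illustration (not the Flink API): tumbling event-time windows
// with a watermark derived from the maximum event time seen so far.
public class TumblingWindowSketch {
    static final long WINDOW_MS = 10_000;   // 10-second tumbling windows
    static final long LATENESS_MS = 2_000;  // allowed out-of-orderness

    long watermark = Long.MIN_VALUE;        // max event time seen minus lateness
    final Map<Long, Long> counts = new TreeMap<>(); // window start -> event count

    // Returns false if the event arrived behind the watermark and was dropped.
    boolean onEvent(long eventTimeMs) {
        watermark = Math.max(watermark, eventTimeMs - LATENESS_MS);
        if (eventTimeMs < watermark) {
            return false;                   // late event: behind the watermark
        }
        long windowStart = (eventTimeMs / WINDOW_MS) * WINDOW_MS;
        counts.merge(windowStart, 1L, Long::sum);
        return true;
    }

    public static void main(String[] args) {
        TumblingWindowSketch w = new TumblingWindowSketch();
        w.onEvent(1_000);                   // lands in window [0, 10000)
        w.onEvent(12_500);                  // window [10000, 20000); watermark advances to 10500
        boolean kept = w.onEvent(9_000);    // behind the watermark: dropped
        System.out.println(w.counts + " lateKept=" + kept);
    }
}
```

In a real engine the watermark is a first-class stream element that also triggers window emission and interacts with checkpointing; this sketch only shows the bookkeeping a candidate should be able to reason about.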

Preferred Qualifications:

  • Contributions to Apache Flink or comparable open-source projects in the streaming, messaging, or data infrastructure space (e.g., Kafka, Pulsar, Spark, Iceberg, Paimon).
  • Committer or PMC status on a relevant Apache project.
  • Hands-on experience with the internals of Flink or a similar distributed stream processing engine.
  • Experience building or maintaining connectors, catalog integrations, or CDC pipelines.
  • Familiarity with integrating AI/ML model serving or inference into data processing pipelines.
  • Experience with storage formats and lakehouse technologies (Parquet, ORC, Iceberg, Hudi, Paimon, Delta Lake).

Skills

Java · JVM · Apache Flink · Kafka · Pulsar · Spark · Iceberg · Paimon · Parquet · ORC · Hudi · Delta Lake
