Java Engineer to OpenPipeline Team
Dynatrace
Linz · Hybrid · Senior · 2 weeks ago
About the role
In this role, you will develop and optimize data ingestion capabilities in OpenPipeline using Java 21. You'll work with cross-functional teams to design reliable, high-performance services and mentor peers. You'll tackle complex technical challenges, participate in code reviews, and use Dynatrace to monitor and improve component performance. The role sits in an international environment focused on scalable, cloud-native data processing and observability, with meaningful impact on product delivery and the customer experience.
Responsibilities
- Develop and optimize data ingestion pipelines in OpenPipeline
- Prototype, design, and implement low‑level components in Java 21
- Ensure reliability, resilience, and performance of services
- Troubleshoot and resolve complex technical issues
- Collaborate with product architects and cross‑functional teams on new features
- Mentor and support team members to foster collaboration
- Participate in code reviews, testing, and debugging to maintain high quality
- Take ownership of components and use Dynatrace for performance optimization
- Join an international team to improve delivery, collaboration, and communication
Requirements
- 5+ years of hands‑on Java backend/core development
- Degree in Computer Science or related field
- Familiarity with Apache Kafka
- Cloud platforms (AWS, Azure, GCP) and Kubernetes are a plus
- Strong analytical and problem‑solving skills
- Experience in managing and processing large‑scale data sets
Benefits
- Competitive salary aligned with qualifications, plus stock purchase options
- Comprehensive benefits package
- Hybrid work model: 2–3 days in the office
- Contract of employment
Skills
AWS · Azure · Docker · GCP · Java · Kubernetes · OpenPipeline · Dynatrace · Apache Kafka