Kafka Big Data Engineer
PBT Group
About the role
PBT Group has an opportunity for a Kafka Big Data Engineer. As a Kafka Engineer, you will be responsible for building, improving, and scaling our streaming data platform. This role requires strong technical skills, a deep understanding of distributed systems, and excellent communication abilities.
A Kafka engineer is a big data engineer who specializes in developing and managing Kafka-based data pipelines. Kafka is a distributed streaming platform that can be used to build real-time data pipelines and streaming applications. In this role, you will design and operate these pipelines and work alongside other big data technologies such as Hadoop, Spark, and Storm.
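The produce-then-consume flow that such pipelines follow can be sketched in plain Python. This is an illustrative stand-in only: a simple in-memory queue plays the role of a Kafka topic, and the record shape and aggregation logic are assumptions for the example. A real pipeline would use a Kafka client library (e.g. kafka-python or confluent-kafka) against a running broker.

```python
from collections import deque

# Hypothetical in-memory stand-in for a Kafka topic; a real pipeline
# would produce to and consume from a broker via a Kafka client.
topic = deque()

def produce(record):
    """Append a record to the topic, as a Kafka producer would."""
    topic.append(record)

def consume_and_aggregate():
    """Drain the topic and total clicks per user, mirroring the
    consume -> process step of a streaming pipeline."""
    totals = {}
    while topic:
        record = topic.popleft()
        totals[record["user"]] = totals.get(record["user"], 0) + record["count"]
    return totals

produce({"user": "alice", "count": 3})
produce({"user": "alice", "count": 2})
produce({"user": "bob", "count": 5})
print(consume_and_aggregate())  # → {'alice': 5, 'bob': 5}
```

The same pattern scales to real Kafka work: producers write events to a topic, and consumer groups read and transform them downstream.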
Duties:
• Design, develop, and manage Kafka-based data pipelines.
• Work with other big data technologies such as Hadoop, Spark, and Storm.
• Monitor and optimize Kafka clusters.
• Troubleshoot Kafka-related issues.
• Handle customer queries and support.
Skills and Experience Required:
• 3+ years of experience in big data or a related field.
• Strong knowledge of Kafka and other big data technologies.
• Good programming skills in Python.
• Good understanding of distributed systems.
• Good communication and interpersonal skills.
Qualifications/Certification:
• B.Tech/BE/M.Tech in Computer Science or a related field.
* In order to comply with the POPI Act, we require your permission to retain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT Group your consent to do so.