AWS Cloud Engineer V
Innovative Information Technologies, Inc.
About the role
Kafka Platform Engineer V (AWS Cloud & Streaming Platforms)
The Kafka Platform Engineer V will play a key role in defining, building, and operating our real-time data streaming platform on AWS. This is a hands-on engineering role responsible for Kafka platform strategy, architecture, and operations, while also contributing across broader cloud platforms including machine learning (SageMaker) and data warehousing (Snowflake). The ideal candidate has deep experience with event-driven architectures, strong AWS platform knowledge, and a proven ability to design, build, and operate scalable, production-grade data platforms.
DUTIES AND RESPONSIBILITIES:
- Lead the hands-on design, build, and operation of Kafka-based streaming platforms (Amazon MSK and the Kafka ecosystem).
- Define and execute the Kafka platform roadmap, architecture, and best practices in alignment with the enterprise data strategy.
- Build and manage real-time data pipelines using Kafka, Kafka Connect, and Schema Registry.
- Implement Infrastructure as Code (IaC) using Terraform to provision and manage Kafka clusters and supporting AWS infrastructure.
- Ensure high availability, scalability, and multi-region resiliency of Kafka environments.
- Monitor and optimize platform performance using tools such as Lenses and Prometheus, along with cloud-native observability solutions.
- Provide hands-on support and troubleshooting for Kafka clusters, streaming pipelines, and integrations.
- Collaborate with application, data engineering, and analytics teams to enable event-driven integration patterns.
- Support and contribute to AWS-based platforms, including:
  - SageMaker for ML workflows and model lifecycle support
  - Snowflake for data ingestion and streaming integrations
- Drive security, governance, and compliance across streaming and cloud platforms.
- Create and maintain technical documentation, standards, and runbooks.
- Mentor junior engineers and promote engineering best practices across the team.
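The event-driven integration patterns named in the duties above can be illustrated with a minimal in-memory sketch (plain Python, no Kafka dependency; the `EventBus` class, the "orders" topic, and the handlers are illustrative examples, not part of this role's actual stack):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory publish/subscribe bus illustrating the
    event-driven pattern. Kafka topics work analogously, but durably,
    partitioned, and across processes."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan-out: every subscriber of the topic receives every event,
        # decoupling the producer from its consumers.
        for handler in self._subscribers[topic]:
            handler(event)

# Illustrative wiring: an "orders" topic feeding two independent consumers.
bus = EventBus()
audit_log, notifications = [], []
bus.subscribe("orders", audit_log.append)
bus.subscribe("orders", lambda e: notifications.append(f"order {e['id']} placed"))
bus.publish("orders", {"id": 42, "total": 19.99})
```

Both consumers receive the event without the producer knowing either exists; that decoupling is what lets application, data engineering, and analytics teams integrate independently.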
MINIMUM KNOWLEDGE, SKILLS, AND ABILITIES REQUIRED:
- Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience)
- 8+ years of experience in data engineering, platform engineering, or cloud infrastructure
- Strong hands-on experience with:
  - Apache Kafka (cluster administration, topics, partitions, replication)
  - Amazon MSK (Managed Streaming for Apache Kafka)
  - Kafka Connect and Schema Registry
- Proven experience with event-driven architecture and real-time streaming systems
- Strong experience with Terraform (Infrastructure as Code) for AWS
- Hands-on experience with AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.)
- Experience supporting production-grade distributed systems with high availability requirements
- Strong troubleshooting and performance tuning skills
- Experience with CI/CD pipelines and DevOps practices
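The keyed partitioning mentioned in the Kafka requirements above can be sketched as follows. This is a simplified stand-in: Kafka's DefaultPartitioner actually applies murmur2 to the serialized key; md5 is used here only so the sketch is deterministic and dependency-free.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Simplified keyed partitioner: hash the key, mod the partition count.
    (Kafka's DefaultPartitioner uses murmur2 on the serialized key;
    md5 is substituted here purely for a self-contained illustration.)"""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All records sharing a key land in the same partition, which is what
# gives Kafka its per-key ordering guarantee.
p1 = partition_for(b"customer-123", 6)
p2 = partition_for(b"customer-123", 6)
p3 = partition_for(b"customer-456", 6)
```

Because a record's partition is a pure function of its key, consumers can rely on seeing all events for a given key in order, one partition at a time.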
PREFERRED SKILLS:
- Experience with multi-region Kafka deployments and disaster recovery strategies
- Exposure to SageMaker (ML pipelines, model deployment)
- Experience integrating Kafka with Snowflake or other OLAP/analytical platforms
- Experience with container platforms (Kubernetes/EKS)
- Familiarity with monitoring tools (Prometheus, Grafana, Dynatrace, etc.)
- Scripting experience (Python, Bash)
- Experience with Lenses or similar Kafka management tools
- AWS certifications (Solutions Architect, DevOps Engineer)
KEY CHARACTERISTICS:
- A strong hands-on engineer, not just an architect
- Ability to operate independently and take ownership of the platform
- Comfortable working in fast-paced, evolving cloud environments
- Strong communication and collaboration skills