
Data / ML Engineer

Powerfleet

South Africa · On-site · Full-time · Senior · Posted yesterday

About the role

About Powerfleet

Powerfleet (Nasdaq: AIOT; JSE: PWR) is a global leader in the artificial intelligence of things (AIoT) software-as-a-service (SaaS) mobile asset industry. With more than 30 years of experience, Powerfleet unifies business operations through the ingestion, harmonization, and integration of data—regardless of source—and delivers actionable insights to help companies meet their strategic objectives around Safety, Compliance, Efficiency and Sustainability. Our people-first culture and relentless innovation empower customers to achieve measurable, sustainable business improvements. Powerfleet serves over 2.6 million subscribers across more than 48,000 customers in 120 countries, with commercial operations across every major continent.

We’re looking for a seasoned Data / ML Engineer to join our Data Systems team: someone who thrives on building robust, scalable data infrastructure and is comfortable bridging the gap between raw telemetry and production-grade analytical products.

What You’ll Do

  • Design, build, and maintain high-throughput ETL/ELT pipelines handling large volumes of real-time and batch telematics data
  • Architect and evolve our data lake and warehouse infrastructure for reliability, scalability, and cost efficiency
  • Build and maintain stream processing systems for low-latency data ingestion and transformation
  • Collaborate with data scientists to operationalise ML models and integrate outputs into data products
  • Own data quality, observability, and governance across the platform
  • Contribute to architectural decisions across cloud infrastructure, storage, and compute layers

Qualifications

  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field with 8+ years of relevant experience, OR
  • Master’s degree in a relevant field with 5+ years of experience

Core Competencies

  • Big Data & Processing: Advanced proficiency with Apache Spark (PySpark preferred); experience with high-volume batch and stream processing
  • Streaming: Hands‑on experience with Apache Kafka or equivalent event‑streaming platforms; familiarity with event‑driven architectures and real‑time streaming (e.g. Azure Event Hubs)
  • Data Lake & Lakehouse: Proven experience designing and hydrating data lakes and data warehouses; familiarity with open table formats (Apache Hudi, Delta Lake, or Iceberg)
  • Databases: Strong working knowledge of PostgreSQL, MS SQL, Snowflake, and cloud‑native databases (e.g. AWS Aurora, Redshift, DynamoDB); expertise in data modelling, performance tuning, and warehousing methodologies (Kimball or Inmon)
  • Languages: Python (primary); C# or Java (required); strong SQL; shell scripting (Bash/PowerShell) for automation; REST API development
  • Cloud — AWS: Solid experience with AWS (S3, Glue, EMR, Lambda, Redshift, Step Functions, Lake Formation); multi‑cloud exposure is a plus
  • DevOps & Tooling: CI/CD practices using Azure DevOps or GitHub Actions; Infrastructure-as-Code with Terraform and/or CloudFormation; database deployment automation with Liquibase
  • Containerisation & Orchestration: Hands‑on experience with Docker, Kubernetes, and Apache Airflow for workflow orchestration
  • Monitoring & Observability: Experience implementing monitoring, logging, and alerting for data systems using AWS CloudWatch, OpenTelemetry, or equivalent
  • Data Governance & Security: Working knowledge of RBAC and IAM policies, data encryption, data lineage, and compliance best practices
  • AI & Automation: Experience building or integrating automated ML pipelines and data workflows; familiarity with AI‑assisted tooling, LLM integration patterns, or agentic data processing is a strong plus

Advantageous

  • Experience with MLflow, SageMaker, Azure ML, or similar MLOps tooling
  • Knowledge of data mesh or data product patterns
  • Azure cloud familiarity (Databricks, ADLS, Azure SQL, Event Hubs)
  • Telematics, IoT, warehouse/WMS, or time‑series data experience

What We’re Looking For

  • Someone who takes end‑to‑end ownership, from pipeline design to production reliability
  • Communicates technical trade‑offs clearly to both engineering and non‑technical stakeholders
  • Stays current with the data engineering landscape and applies new tools pragmatically
  • Works well in a collaborative, cross‑functional environment
  • Has a track record of delivering at scale, not just in proof‑of‑concept
  • Brings a quality mindset and thinks about correctness, observability, and maintainability from day one

Equal Employment Opportunity Statement

Powerfleet is committed to maintaining a diverse, equitable, and inclusive workplace where all individuals are treated with dignity and respect. Employment decisions are based on qualifications, merit, and business needs. We do not discriminate or tolerate harassment on any basis protected under applicable laws in the countries where we operate.

Skills

Apache Airflow, Apache Hudi, Apache Kafka, Apache Spark, AWS Aurora, AWS CloudWatch, AWS EMR, AWS Glue, AWS Lake Formation, AWS Lambda, AWS Redshift, AWS S3, AWS Step Functions, Bash, C#, Docker, Delta Lake, Iceberg, Infrastructure-as-Code, Java, Kubernetes, MS SQL, MLflow, OpenTelemetry, PostgreSQL, PowerShell, Python, REST API, SageMaker, Snowflake, SQL, Terraform, AWS DynamoDB, Azure Databricks, Azure Event Hubs, Azure ML, Azure SQL, ADLS
