Data Engineer - Information Technology (Banking experience)
Joblink Placement
Sandton · Hybrid · Full-time · Posted 2 weeks ago
About the Role
A Data Engineer designs, builds, and maintains scalable data pipelines and platforms that enable reliable data ingestion, storage, processing, and access. The role focuses on transforming raw data into high-quality, trusted datasets that support analytics, reporting, and data science use cases.
Key Responsibilities
- Design, develop, and maintain data pipelines (batch and/or streaming)
- Build and optimize data integration processes from multiple data sources
- Develop and manage data models for analytics and reporting
- Ensure data quality, accuracy, and reliability through validation and monitoring
- Implement and maintain ETL/ELT workflows (a minimal pipeline sketch follows this list)
- Optimize data storage and query performance
- Collaborate with data analysts, data scientists, and business stakeholders
- Maintain documentation for data pipelines, schemas, and processes
- Enforce data governance, security, and compliance standards
- Troubleshoot and resolve data-related issues in production environments
- Support platform scalability, resilience, and cost optimization
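For candidates unfamiliar with this kind of work, the following is a minimal batch-pipeline sketch using Airflow's TaskFlow API (Airflow 2.4+, Airflow being one of the ETL/ELT tools listed below). It is an illustration only: every table name, file path, and validation rule in it is a hypothetical example, not a detail from this posting.

```python
# Minimal daily batch pipeline sketch (Airflow 2.4+ TaskFlow API).
# Paths, column names, and the quality rule are hypothetical examples.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def transactions_pipeline():
    @task
    def extract() -> str:
        # Pull raw data from a source system; here, a placeholder CSV path.
        return "/data/raw/transactions.csv"

    @task
    def validate(raw_path: str) -> str:
        # Basic data-quality gate: fail the run on nulls in a key column,
        # then persist the validated data as columnar Parquet.
        df = pd.read_csv(raw_path)
        if df["transaction_id"].isnull().any():
            raise ValueError("Null transaction_id found; failing the run")
        clean_path = "/data/clean/transactions.parquet"
        df.to_parquet(clean_path, index=False)
        return clean_path

    @task
    def load(clean_path: str) -> None:
        # Load the validated file into the warehouse (stubbed out here).
        print(f"COPY {clean_path} into a warehouse staging table")

    # TaskFlow chaining sets the extract -> validate -> load dependencies.
    load(validate(extract()))


transactions_pipeline()
```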
Required Skills & Experience
Technical Skills
- Strong proficiency in SQL
- Experience with Python, Scala, or Java
- Hands-on experience with data warehouses (e.g. Snowflake, BigQuery, Redshift, Synapse)
- Experience with ETL/ELT tools (e.g. Airflow, dbt, Azure Data Factory, Informatica)
- Knowledge of cloud platforms (Azure, AWS, or GCP)
- Understanding of data modeling (star schema, snowflake schema, dimensional modeling; see the sketch after this list)
- Experience with version control (Git)
- Familiarity with CI/CD pipelines for data workloads
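As context for the dimensional-modeling requirement, here is a minimal star-schema sketch: one fact table with surrogate-key references into two dimensions, written as generic SQL inside Python strings so it could be sent through any warehouse driver. All names are hypothetical, and the DDL would need dialect adjustments for Snowflake, BigQuery, Redshift, or Synapse (several of which do not enforce foreign-key constraints).

```python
# Star-schema sketch: a fact table joined to dimensions via surrogate keys.
# Table and column names are hypothetical; syntax is generic SQL.
DIM_CUSTOMER = """
CREATE TABLE dim_customer (
    customer_key   INTEGER PRIMARY KEY,  -- surrogate key
    customer_id    VARCHAR(32),          -- natural/business key
    segment        VARCHAR(32),
    valid_from     DATE,                 -- slowly-changing-dimension window
    valid_to       DATE
);
"""

DIM_DATE = """
CREATE TABLE dim_date (
    date_key       INTEGER PRIMARY KEY,  -- e.g. 20240131
    calendar_date  DATE,
    month          INTEGER,
    year           INTEGER
);
"""

FACT_TRANSACTION = """
CREATE TABLE fact_transaction (
    transaction_id VARCHAR(64),
    customer_key   INTEGER REFERENCES dim_customer (customer_key),
    date_key       INTEGER REFERENCES dim_date (date_key),
    amount         NUMERIC(18, 2)        -- additive measure
);
"""
```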
Data & Platform Knowledge
- Relational and NoSQL databases
- File-based data formats (Parquet, Avro, JSON, CSV; see the conversion sketch after this list)
- Data streaming concepts such as Kafka, Event Hubs, or Kinesis (advantageous)
- Performance tuning and query optimization
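As a small illustration of the file-format point above, the sketch below converts a row-oriented CSV extract to columnar Parquet with pandas and the pyarrow engine; Parquet typically compresses better and lets query engines scan only the columns they need. The paths, column names, and dtypes are hypothetical.

```python
# CSV-to-Parquet conversion sketch using pandas (pyarrow engine).
# Paths, column names, and dtypes are hypothetical examples.
import pandas as pd

# Read a row-oriented CSV extract, parsing types up front.
df = pd.read_csv(
    "/data/raw/transactions.csv",
    parse_dates=["transaction_date"],
    dtype={"transaction_id": "string", "amount": "float64"},
)

# Write a compressed, columnar Parquet file for downstream analytics.
df.to_parquet(
    "/data/clean/transactions.parquet",
    engine="pyarrow",
    compression="snappy",
    index=False,
)
```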
Soft Skills
- Strong analytical and problem-solving abilities
- Ability to work independently and in cross-functional teams
- Clear communication with technical and non-technical stakeholders
- Attention to detail
Job Features
- Job Category: Banking, IT
- Salary: Depending on experience
- Commencement Date: ASAP
- Location: Sandton
- Contract Type: Contract
- Duration: 12 months, with a strong possibility of renewal depending on business needs
- Work Arrangement: Hybrid, onsite a minimum of 3 days per week with the remaining 2 days offsite, at management's discretion
Contact Person
Amori Prinsloo
Skills
AWS, Azure, BigQuery, CSV, dbt, Docker, GCP, Git, Informatica, JSON, Java, Kafka, NoSQL, Parquet, Python, Redshift, relational databases, Scala, Snowflake, SQL, Synapse, Avro, Airflow, Azure Data Factory, Event Hubs, Kinesis