
Apptad – Data Engineer with OpenShift

Apptad Inc

Mississauga · Hybrid · Full-time

About the role

Job Title

Data Engineer with OpenShift

Job Location

Mississauga, ON (Need Onsite day 1, hybrid 3 days from office)

Job Duration

Long Term

Our challenge

We are seeking a skilled Data Engineer with expertise in Databricks, Snowflake, Python, PySpark, SQL, and release management to join our dynamic team. The ideal candidate will have a strong background in the banking domain and will be responsible for designing, developing, and maintaining robust data pipelines and systems to support our banking operations and analytics.

Responsibilities

  • Design, develop, and maintain scalable and efficient data pipelines using Snowflake, PySpark, and SQL.
  • Write optimized and complex SQL queries to extract, transform, and load data.
  • Develop and implement data models, schemas, and architecture that support banking domain requirements.
  • Collaborate with data analysts, data scientists, and business stakeholders to gather data requirements.
  • Automate data workflows and ensure data quality, accuracy, and integrity.
  • Manage and coordinate release processes for data pipelines and analytics solutions.
  • Monitor, troubleshoot, and optimize the performance of data systems.
  • Ensure compliance with data governance, security, and privacy standards within the banking domain.
  • Maintain documentation of data architecture, pipelines, and processes.
  • Stay updated with the latest industry trends and incorporate best practices.
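To illustrate the kind of extract-transform-load work these responsibilities describe, here is a minimal SQL-based ETL sketch. It uses Python's built-in SQLite as a stand-in for Snowflake, and the table and column names (`raw_transactions`, `daily_totals`) are purely illustrative, not part of this role's actual systems.

```python
import sqlite3

# SQLite stands in for Snowflake here; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: land raw banking transactions in a staging table.
cur.execute(
    "CREATE TABLE raw_transactions (account TEXT, txn_date TEXT, amount REAL)"
)
cur.executemany(
    "INSERT INTO raw_transactions VALUES (?, ?, ?)",
    [
        ("A1", "2024-01-01", 100.0),
        ("A1", "2024-01-01", -25.0),
        ("A2", "2024-01-02", 50.0),
    ],
)

# Transform + Load: aggregate into a reporting table in a single statement.
cur.execute(
    """
    CREATE TABLE daily_totals AS
    SELECT account, txn_date, SUM(amount) AS total
    FROM raw_transactions
    GROUP BY account, txn_date
    """
)

rows = cur.execute(
    "SELECT account, txn_date, total FROM daily_totals ORDER BY account"
).fetchall()
print(rows)  # [('A1', '2024-01-01', 75.0), ('A2', '2024-01-02', 50.0)]
conn.close()
```

In production such logic would typically run as a PySpark job or a scheduled Snowflake task, but the extract/transform/load shape is the same.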

Requirements

  • Proven experience as a Data Engineer or in a similar role with a focus on Snowflake, Python, PySpark, and SQL.
  • Strong understanding of data warehousing concepts and cloud data platforms, especially Snowflake.
  • Experience with OpenShift.
  • Hands-on experience with release management, deployment, and version control practices.
  • Solid understanding of banking and financial services industry data and compliance requirements.
  • Proficiency in Python scripting and PySpark for data processing and automation.
  • Experience with ETL/ELT processes and tools.
  • Knowledge of data governance, security, and privacy standards.
  • Excellent problem‑solving and analytical skills.
  • Strong communication and collaboration abilities.

Preferred, but not required

  • Good knowledge of Azure and Databricks is highly preferred.
  • Knowledge of Apache Kafka or other streaming technologies.
  • Familiarity with DevOps practices and CI/CD pipelines.
  • Prior experience working in the banking or financial services industry.

Skills

Databricks · ETL · ELT · OpenShift · Python · PySpark · Release Management · Snowflake · SQL · Version Control
