Data Engineer
BC Financial Services Authority
About the role
Positions Overview
We are recruiting for two key Data Engineering roles within the British Columbia Financial Services Authority (BCFSA): one permanent position and one temporary (1‑year term) position. Candidates applying through this competition may be considered for one of two levels, based on demonstrated strengths across platforms, operations, machine learning, and/or software delivery:
- Senior Data Engineer (L4) (salary band: $90,069 - $140,235)
- Data Engineer (L3) (salary band: $73,182 - $107,310)
These roles are central to advancing BCFSA’s data strategy and enabling high‑quality, reliable, and scalable data solutions across the organization. Successful candidates will contribute to designing, building, and optimizing data pipelines, ensuring data quality and governance, and supporting analytics and regulatory functions across the financial services sector we oversee.
Our Data Engineers play a critical role in enabling evidence‑based decision‑making, strengthening regulatory insight, and supporting BCFSA’s mandate to protect consumers and promote a stable, transparent, and fair financial services marketplace.
APPLICATION AND SELECTION PROCESS
Candidates will be assessed for the level that best aligns with their skills, experience, and technical depth. This approach allows us to consider a broad and diverse pool of applicants while ensuring a fair and consistent evaluation process.
Assessment Criteria
Applications will be assessed based on qualifications, including education, experience, and relevant skills. We will carefully review each candidate’s background to determine which role they are most suited for.
Selection for Roles
Candidates who meet the qualifications will be invited to participate in the competition for the role that aligns best with their experience and education. This ensures a fair and targeted approach to matching candidates with the most appropriate role.
ACCOUNTABILITIES
- Design, build, and operate production‑ready data pipelines that support regulatory, operational, and analytical use cases across BCFSA’s current and evolving data platforms, including Databricks and other enterprise data and analytics platforms.
- Make high‑quality, well-modeled data available to a range of data consumers, including data and business analysts, citizen report developers, and application teams.
- Operationalize data delivery, including deployment, monitoring, troubleshooting, and continuous improvement of data pipelines.
- Curate, review, and productionize datasets and pipelines created by analysts or non‑technical users, ensuring they meet quality, performance, and reliability expectations.
- Enable the effective use of data and analytics outputs within downstream business processes, reports, dashboards, and applications.
- Build and operate feature pipelines and MLOps workflows to enable deployment, monitoring, and lifecycle management of machine learning models (batch and/or real-time) in alignment with security and governance requirements.
- Design and deliver end-to-end data solutions, including software components and integrations that operationalize curated datasets for downstream systems and users.
- Ensure all data solutions comply with established data governance, security, privacy, and access control requirements.
- Partner with business and data analysts to support reporting and visualization workflows, including guidance on data modeling and interpretation where needed.
- Develop and support ETL/ELT processes, with a strong focus on performance, reliability, and maintainability.
- Support data migration and modernization initiatives involving legacy and existing platforms (e.g., SQL Server, Synapse, Dataverse, SharePoint, and other enterprise systems), working closely with the Data Architect, Principal Data Engineer, and application teams.
- Provide coaching, guidance, and support to employees, management, and leadership on data engineering and analytics enablement topics.
- Contribute to the development and improvement of team practices, documentation, tools, and reusable components.
- Provide orientation, knowledge transfer, and training to internal and external stakeholders within their area of responsibility.
- Participate in and support broader organizational business and data transformation initiatives.
JOB REQUIREMENTS
- Working experience with Azure Cloud services and analytics platforms, including Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Functions, Azure Key Vault, Synapse, Git-based version control, SQL Server, and CI/CD.
- Working experience with SQL, Python, and Spark or similar data processing frameworks.
- Working experience in data management, ingestion, transformation, orchestration, data modelling, and data warehousing.
- Experience designing and maintaining CI/CD for data pipelines and orchestration frameworks.
- Experience designing and implementing test automation for data, including unit, schema, contract, and data quality tests.
- Experience productionizing machine learning solutions, including feature engineering, automated deployment (CI/CD), and monitoring of data/model performance and drift (MLOps).
- Strong software engineering experience delivering production services and integrations (e.g., REST APIs), including testing practices, CI/CD, and operational support/observability.
- Demonstrated ability to take ownership of assigned solutions, working independently while aligning with established architecture and standards.
- Strong problem-solving and analytical skills, with the ability to understand the requirements of both business and technology stakeholders and ensure data solutions align with organizational goals.
- Exceptional verbal and written communication skills, enabling clear and effective articulation of data-related solutions and challenges.
- Familiarity with Agile methodologies and DevOps practices in data analytics environments.
- Familiarity with data governance concepts or metadata management is an asset.
- Experience integrating data pipelines with internal and external ERP/CRM platforms (e.g., Oracle, Workday, Dynamics) using API connectors, webhooks, and modern ETL tools is an asset.
- Exposure to advanced analytics, data science, machine learning, or AI‑enabled workflows is an asset.
- Relevant Azure and Databricks certifications are an asset.
EDUCATION
- Bachelor’s degree in Computer Science, Software Engineering, Information Technology, Data Science, or a related quantitative or technical discipline. A Master’s degree in a related field is an asset.