
Senior Data Engineer - Ag Trading

Cargill

Bengaluru · On-site · Full-time · Senior

About the role

Job Purpose and Impact

The Senior Data Engineering professional designs, builds and maintains complex data systems that enable data analysis and reporting. With minimal supervision, this role ensures that large sets of data are efficiently processed and made accessible for decision making.

Key Accountabilities

  • DATA INFRASTRUCTURE: Prepares data infrastructure to support the efficient storage and retrieval of data.
  • DATA FORMATS: Examines and determines appropriate data formats to improve data usability and accessibility across the organization.
  • DATA & ANALYTICAL SOLUTIONS: Develops complex data products and solutions using advanced engineering and cloud based technologies, ensuring they are designed and built to be scalable, sustainable and robust.
  • DATA PIPELINES: Develops and maintains streaming and batch data pipelines that ingest data from various sources, transform it into usable information and move it to data stores such as data lakes and data warehouses (a minimal sketch follows this list).
  • DATA SYSTEMS: Reviews existing data systems and architectures to identify areas for improvement and optimization.
  • STAKEHOLDER MANAGEMENT: Collaborates with multi-functional data and advanced analytics teams as well as with business teams to gather requirements and ensure that data solutions meet the functional and non-functional needs of various partners.
  • DATA FRAMEWORKS: Builds complex prototypes to test new concepts and implements data engineering frameworks and architectures that improve data processing capabilities and support advanced analytics initiatives.
  • AUTOMATED DEPLOYMENT PIPELINES: Develops automated deployment pipelines improving efficiency of code deployments with fit for purpose governance.
  • DATA MODELING: Performs complex data modeling in accordance with the datastore technology to ensure sustainable performance and accessibility.
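
To illustrate the kind of streaming-to-warehouse pipeline referenced in the DATA PIPELINES accountability, below is a minimal Python sketch that consumes events from a Kafka topic and lands them in a Snowflake staging table in micro-batches. This is a sketch under stated assumptions, not this team's actual stack: the libraries (confluent-kafka, snowflake-connector-python), topic name, table, and connection details are all hypothetical.

    # Hypothetical micro-batch loader: Kafka topic -> Snowflake staging table.
    # All names, credentials and library choices below are illustrative assumptions.
    import json

    from confluent_kafka import Consumer
    import snowflake.connector

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # hypothetical broker address
        "group.id": "trade-events-loader",    # hypothetical consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["trade-events"])      # hypothetical topic

    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",  # placeholders
        warehouse="LOAD_WH", database="RAW", schema="TRADING",
    )

    def flush(batch):
        # Apply a trivial transform and insert the micro-batch into staging.
        rows = [(e["trade_id"], e["symbol"], e["qty"], e["price"]) for e in batch]
        cur = conn.cursor()
        cur.executemany(
            "INSERT INTO trade_events_stg (trade_id, symbol, qty, price) "
            "VALUES (%s, %s, %s, %s)",
            rows,
        )
        cur.close()

    batch = []
    while True:
        msg = consumer.poll(1.0)              # wait up to 1 second for a message
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= 500:                 # flush in micro-batches of 500 events
            flush(batch)
            batch.clear()

A production pipeline would add schema validation, dead-letter handling, offset management, and monitoring around this skeleton; the same shape applies to Apache Pulsar with its own client library.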

Qualifications

  • Minimum requirement of 4 years of relevant work experience.
  • Typically reflects 5 years or more of relevant experience.
  • TECHNICAL SKILLS REQUIRED:
    • Data Platform Design - Designing scalable ELT data platforms on Snowflake supporting batch and real-time workloads
    • Advanced Python Engineering - Building production-grade Python pipelines and reusable data frameworks, with working knowledge of .NET services and integrations
    • Snowflake & Relational Database Expertise - Deep knowledge of Snowflake architecture, advanced SQL, and experience working with Oracle, SQL Server, and PostgreSQL
    • Batch & Real-Time Processing - Designing and operating reliable batch and streaming / real-time data pipelines using Apache Kafka and Apache Pulsar
    • Performance & Cost Optimization - Optimizing Snowflake queries, warehouse usage, and Python workloads for efficiency and scale
    • Security & Governance - Implementing access controls, data protection, and secure data-sharing patterns across data platforms
    • Reliability & Data Quality - Ensuring pipeline resilience, monitoring, and data quality across critical datasets
    • GenAI Enablement - Enabling GenAI use cases through high-quality data pipelines, including preparation of structured and unstructured data, embeddings, and integration with OpenAI (e.g., RAG-style workflows); a minimal embedding sketch follows this list
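
As a rough illustration of the GenAI enablement item above, the sketch below chunks a document, requests embeddings through the OpenAI Python client, and collects (chunk, vector) rows that could feed a RAG-style retrieval step. The model name, chunking strategy, document id and storage target are assumptions made for the example, not details from the posting.

    # Hedged sketch of an embedding-preparation step for a RAG-style workflow.
    # Model choice, chunk size and downstream storage are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chunk_text(text: str, size: int = 800) -> list[str]:
        # Naive fixed-size chunking; real pipelines usually split on structure.
        return [text[i:i + size] for i in range(0, len(text), size)]

    def embed_document(doc_id: str, text: str) -> list[dict]:
        chunks = chunk_text(text)
        resp = client.embeddings.create(
            model="text-embedding-3-small",   # assumed embedding model
            input=chunks,
        )
        return [
            {"doc_id": doc_id, "chunk_index": i, "chunk": c, "embedding": item.embedding}
            for i, (c, item) in enumerate(zip(chunks, resp.data))
        ]

    # The resulting rows could be bulk-loaded into a Snowflake table or a
    # dedicated vector store and queried at retrieval time.
    rows = embed_document("contract-001", "Long unstructured contract text ...")

The point of the sketch is the pipeline shape (prepare structured and unstructured text, produce embeddings, stage them for retrieval), not any specific model or store.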

PREFERRED COMPETENCIES

  • Proven experience working in the Trading and / or Finance industry
  • Proven experience with MS Power BI and Tableau


Requirements

  • Minimum 4 years of relevant work experience (typically 5+ years)
  • Design scalable ELT data platforms on Snowflake supporting batch and real-time workloads
  • Build production‑grade Python pipelines and reusable data frameworks
  • Working knowledge of .NET services and integrations
  • Deep knowledge of Snowflake architecture and advanced SQL
  • Experience with Oracle, SQL Server, and PostgreSQL databases
  • Design and operate reliable batch and streaming data pipelines using Apache Kafka and Apache Pulsar
  • Optimize Snowflake queries, warehouse usage, and Python workloads for efficiency and scale
  • Implement access controls, data protection, and secure data‑sharing patterns
  • Ensure pipeline resilience, monitoring, and data quality across critical datasets
  • Enable GenAI use cases through high‑quality data pipelines, embeddings, and OpenAI integration
  • Proven experience working in the Trading and/or Finance industry
  • Experience with MS Power BI and Tableau

Responsibilities

  • Prepare data infrastructure to support efficient storage and retrieval
  • Examine and resolve data formats to improve usability and accessibility
  • Develop complex data products and solutions using advanced engineering and cloud technologies
  • Develop and maintain streaming and batch data pipelines for data ingestion and transformation
  • Review existing data systems and architectures to identify improvement opportunities
  • Collaborate with multi-functional data, analytics, and business teams to gather requirements
  • Build complex prototypes and implement data engineering frameworks and architectures
  • Develop automated deployment pipelines to improve code deployment efficiency
  • Perform complex data modeling aligned with datastore technologies

Skills

Snowflake · ELT · Python · .NET · Oracle · SQL Server · PostgreSQL · Apache Kafka · Apache Pulsar · SQL · Data modeling · Data pipelines · Batch processing · Real-time processing · Power BI · Tableau · GenAI · OpenAI · Data governance · Data security
