Data Platform Engineer
Epergne Solutions
Remote · India · Full-time · Mid Level
About the role
Job Role: Data Platform Engineer
Job Location: India / Remote
Experience: 3-5 Years
Roles & Responsibilities
- Design, build, and maintain scalable data pipelines and data platforms in cloud environments (see the sketch after this list).
- Develop and optimize data ingestion, transformation, and processing workflows for large-scale datasets.
- Implement CI/CD pipelines, DevOps practices, and Infrastructure-as-Code for data platform deployment and management.
- Work with big data frameworks to process and analyze high-volume data efficiently.
- Ensure data governance, security, and compliance across data platforms.
- Optimize data performance, indexing, and query execution in distributed environments.
- Support real-time and batch data streaming architectures for analytics and operational workloads.
- Collaborate with cross-functional teams to deliver data-driven solutions and platform improvements.
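For illustration only, a minimal sketch of the kind of batch transformation workflow described in this list. PySpark is assumed here, and the bucket paths, column names, and schema are hypothetical placeholders rather than a real dataset:

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical batch job: roll raw JSON events up into daily counts.
    spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

    # Input path and columns are placeholders for this sketch.
    events = spark.read.json("s3://example-raw/events/dt=2024-01-01/")

    daily = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("event_date", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Partitioned Parquet output lets downstream queries prune by date.
    daily.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-curated/daily_event_counts/"
    )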
Skills & Requirements
- Bachelor's degree in Technology, Engineering, or a related field.
- 3-5 years of experience in enterprise technology, data engineering, or platform operations.
- Proficiency in TypeScript, C#, Python, SQL, or Scala.
- Hands-on experience with CI/CD pipelines, DevOps practices, and Infrastructure-as-Code (AWS CDK, Azure Bicep); see the sketch after this list.
- Experience with big data technologies such as Apache Spark, Hadoop, Kafka, or Flink.
- Knowledge of cloud platforms including AWS, Azure, or GCP, and cloud-native data solutions such as BigQuery, Redshift, Snowflake, or Databricks.
- Understanding of data modeling, data warehousing, distributed systems, and data platform architecture.
- Experience with data ingestion pipelines, governance frameworks, and data security practices.
- Familiarity with multi-cloud environments, performance tuning, and distributed query optimization.
- Exposure to real-time and batch data processing architectures.
- Proficiency with .NET.
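As a hedged illustration of the Infrastructure-as-Code item above, a minimal AWS CDK v2 sketch in Python; the stack name and bucket name are invented for the example:

    from aws_cdk import App, RemovalPolicy, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DataPlatformStack(Stack):
        """Hypothetical stack: a single raw-data landing bucket."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Versioning and server-side encryption are typical governance defaults.
            s3.Bucket(
                self,
                "RawDataBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                removal_policy=RemovalPolicy.RETAIN,
            )

    app = App()
    DataPlatformStack(app, "DataPlatformStack")
    app.synth()

Deploying a stack like this is a single cdk deploy; the same pattern extends to warehouses, streams, and access policies.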
Skills
.NET, Apache Spark, AWS, AWS CDK, Azure, Azure Bicep, BigQuery, C#, Databricks, Flink, GCP, Hadoop, Infrastructure-as-Code, Kafka, Python, Redshift, Scala, SQL, Snowflake, TypeScript