
Data Engineer (Python | Snowflake / Databricks)

Jobs via Dice

Princeton · On-site · Full-time · Mid Level · Posted today

About the role

Role Summary:

We are looking for a skilled Data Engineer with strong expertise in Python, data processing, and modern cloud data platforms such as Snowflake or Databricks. The role involves building scalable data pipelines, enabling analytics, and supporting data-driven decision-making across the organization.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines (ETL/ELT)
  • Develop data workflows using Python
  • Work with large datasets in structured and unstructured formats
  • Implement data solutions using Snowflake or Databricks
  • Optimize data pipelines for performance, reliability, and cost
  • Build and manage data models, data marts, and data lakes
  • Integrate data from multiple sources (APIs, databases, streaming)
  • Ensure data quality, governance, and security
  • Collaborate with data analysts, scientists, and business teams
  • Automate workflows using orchestration tools (Airflow, etc.)
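The ETL/ELT pipeline work described above can be sketched in plain Python. A minimal, illustrative example using only the standard library — the CSV layout, table name, and derived `total` column are assumptions for demonstration, not details from this posting:

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast string fields to numeric types and derive a total."""
    out = []
    for r in rows:
        qty, price = int(r["qty"]), float(r["unit_price"])
        out.append({
            "order_id": int(r["order_id"]),
            "qty": qty,
            "unit_price": price,
            "total": qty * price,
        })
    return out

def load(rows, conn):
    """Load: write the transformed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER, qty INTEGER, unit_price REAL, total REAL)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :qty, :unit_price, :total)",
        rows,
    )

# Hypothetical input data, standing in for an upstream source system.
raw = "order_id,qty,unit_price\n1,2,9.50\n2,1,40.00\n"
conn = sqlite3.connect(":memory:")
loaded = transform(extract(raw))
load(loaded, conn)
total = conn.execute("SELECT SUM(total) FROM orders").fetchone()[0]
```

In production the same extract/transform/load shape would typically run on PySpark or inside Snowflake/Databricks rather than sqlite3; the staging of each step is what carries over.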

Required Skills & Experience:

Programming & Data

  • Strong proficiency in Python
  • Experience with: Pandas, NumPy, PySpark
  • Solid understanding of: Data structures & algorithms, Data processing techniques

Data Engineering

  • Hands-on with: ETL/ELT pipeline development, Data warehousing concepts, Batch & streaming data processing
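The batch vs. streaming distinction mentioned above can be illustrated with plain Python generators — a toy sketch with an assumed record shape, not the actual stack this team uses:

```python
from statistics import mean

# Hypothetical sensor events standing in for a real data source.
events = [{"sensor": "a", "temp": t} for t in (20.0, 22.0, 24.0, 30.0)]

def batch_average(records):
    """Batch: collect the full dataset, then compute one aggregate."""
    return mean(r["temp"] for r in records)

def streaming_averages(records):
    """Streaming: consume records one at a time, emitting a running
    aggregate without ever holding the whole dataset in memory."""
    count, total = 0, 0.0
    for r in records:
        count += 1
        total += r["temp"]
        yield total / count

batch = batch_average(events)
running = list(streaming_averages(events))
```

Batch jobs (e.g. nightly warehouse loads) see the complete input at once; streaming jobs (e.g. Kafka or Spark Streaming consumers) must maintain incremental state, which is the core design difference.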

Platforms

  • Experience with at least one: Snowflake, Databricks
  • Knowledge of: Delta Lake (for Databricks), Snowflake features (streams, tasks, warehouses)

Databases

  • Strong SQL skills
  • Experience with: Relational DBs (PostgreSQL, MySQL), NoSQL (MongoDB – optional)

Cloud (Preferred)

  • Experience in: AWS (S3, Glue, Redshift), Azure (ADF, ADLS), Google Cloud Platform (BigQuery)

Tools

  • Apache Airflow / Prefect
  • Git, CI/CD pipelines
  • Docker (good to have)

Experience:

  • 3–8+ years in Data Engineering or similar role

Certifications (Optional but Valuable):

  • Snowflake SnowPro Certification
  • Databricks Certified Data Engineer
  • AWS / Azure Data certifications

Nice to Have:

  • Experience with real-time streaming (Kafka, Spark Streaming)
  • Knowledge of data governance & catalog tools
  • Exposure to Machine Learning pipelines
  • Understanding of data security & compliance

Soft Skills:

  • Strong analytical and problem-solving mindset
  • Ability to work with cross-functional teams
  • Clear communication and documentation skills

Typical Use Cases in This Role:

  • Building data pipelines for analytics dashboards
  • Supporting ML models with clean, structured data
  • Migrating legacy data warehouses to cloud platforms
  • Creating scalable data lake architectures

Skills

Airflow, AWS, Azure, BigQuery, CI/CD, Databricks, Delta Lake, Docker, GCP, Git, Glue, Google Cloud Platform, Kafka, Machine Learning, MongoDB, MySQL, NoSQL, NumPy, Pandas, PostgreSQL, Prefect, Python, Redshift, Spark Streaming, Spark SQL, Snowflake, S3, Streaming
