
Senior / Lead Data Engineer

Precision Technologies

New York · On-site · Full-time · Senior · 3d ago

Location

New York, United States

Experience

8+ Years

Employment Type

Full-time or W-2 contract with the client. No C2C.

About the Role

We’re looking for an experienced Senior or Lead Data Engineer to work on client projects across diverse domains, including finance, healthcare, retail, and cloud-based ecosystems. The role involves designing, building, and optimizing large-scale data processing systems and ensuring data availability and integrity across platforms.

Key Responsibilities

  • Architect, build, and maintain scalable data pipelines and ETL processes using modern tools and cloud technologies.
  • Develop and deploy data ingestion frameworks for structured and unstructured data sources.
  • Work closely with data scientists, analysts, and DevOps teams to design efficient data models and ensure seamless integrations.
  • Optimize performance of large-scale data lakes and warehouses.
  • Implement data governance, lineage, and quality frameworks.
  • Lead and mentor junior data engineers and ensure best practices in data engineering.

Required Skills

  • 8+ years of hands-on experience in data engineering, ETL development, and pipeline automation.
  • Strong expertise in at least one cloud platform (AWS, Azure, or GCP).
  • Hands-on with Spark, PySpark, Databricks, Kafka, Snowflake, Airflow, SQL, and Python.
  • Experience with data warehousing (Redshift, BigQuery, Synapse, etc.) and streaming technologies.
  • Solid understanding of data modeling, orchestration, and performance tuning.
  • Excellent communication skills and ability to work in collaborative, cross-functional teams.

Nice to Have

  • Experience in data modernization or migration projects.
  • Familiarity with CI/CD pipelines and containerization (Docker/Kubernetes).

Contact

📩 Interested candidates can apply directly or share resumes at raviteja.d@precisiontechcorp.com.

Let’s build the future of data together!

Skills

Airflow · AWS · Azure · Databricks · Docker · ETL · GCP · Kafka · Kubernetes · Python · Redshift · SQL · Spark · Synapse · Snowflake · PySpark
