Senior Data Engineer
Mindlance
McLean · On-site · Contract · Senior · Today
About the role
We are looking for a highly skilled Senior Data Engineer to design, build, and optimize scalable data pipelines and data platforms. The ideal candidate will have strong experience in cloud-based data architectures, big data processing, and modern data warehousing solutions. You will play a key role in enabling data-driven decision-making across the organization.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines to process large volumes of structured and unstructured data
- Build and optimize data pipelines using Apache Spark (PySpark) and Python
- Develop robust data solutions on AWS (S3, Glue, Lambda, Redshift, EMR)
- Implement and manage data workflows using Databricks
- Design and maintain data models and warehouses in Snowflake
- Ensure data quality, integrity, and governance across all data systems
- Optimize data processing performance and cost efficiency in cloud environments
- Collaborate with data scientists, analysts, and business stakeholders to deliver reliable data solutions
- Monitor, troubleshoot, and improve existing data pipelines and workflows
- Implement best practices for data security, compliance, and scalability
Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 6+ years of experience in data engineering or related roles
- Strong programming skills in Python
- Hands-on experience with Apache Spark and PySpark
- Extensive experience with AWS data services (S3, Glue, EMR, Lambda, Redshift)
- Solid experience with the Snowflake data warehouse
- Experience working with the Databricks platform
- Strong understanding of ETL/ELT frameworks and data pipeline design
- Experience with SQL and data modeling techniques
- Familiarity with workflow orchestration tools (Airflow, Step Functions, etc.)
- Knowledge of data governance, security, and compliance best practices
Preferred Qualifications
- Experience with real-time/streaming data processing (Kafka, Kinesis)
- Knowledge of CI/CD pipelines and DevOps practices
- Experience with containerization (Docker, Kubernetes)
- Exposure to data lake and lakehouse architectures
- Certification in AWS or Snowflake is a plus
Skills
Airflow · Apache Spark · AWS Lambda · AWS Redshift · AWS S3 · AWS Step Functions · AWS Glue · AWS EMR · Databricks · Docker · Kubernetes · Kafka · Kinesis · Python · PySpark · Snowflake · SQL