
Data Engineer Intern

Anicca Data Science Solutions

Bellevue · On-site · Full-time · Entry Level · $4k/mo · Posted yesterday

About the role

We are seeking a skilled Data Engineer Intern with 2+ years of experience in designing, building, and optimizing data pipelines and integration solutions. The ideal candidate will have strong expertise in ETL/ELT processes, SQL optimization, and modern data tools across cloud environments.

Duration: 3 months

Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT pipelines using SSIS, dbt, and Airbyte
  • Build and optimize data workflows to process large datasets efficiently
  • Write complex SQL queries, including joins, CTEs, window functions, and performance tuning
  • Develop and maintain data models and data warehouse architectures
  • Work with AWS services (Redshift, S3, Athena) for data storage, transformation, and analytics
  • Ensure data quality, integrity, and consistency across data pipelines
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
  • Implement and manage version control using Git and support CI/CD pipelines for automated deployments
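To illustrate the kind of SQL the responsibilities mention (CTEs and window functions), here is a minimal sketch run against an in-memory SQLite database; the `orders` table, its columns, and the ranking query are hypothetical examples, not part of the posting.

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10.0), (1, 25.0), (2, 40.0);
""")

# A CTE aggregates per-customer totals; a window function ranks them.
query = """
WITH totals AS (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT customer_id,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank
FROM totals;
"""
rows = conn.execute(query).fetchall()
# rows -> [(2, 40.0, 1), (1, 35.0, 2)]
```

Window functions require SQLite 3.25+, which ships with all currently supported Python versions.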

Qualifications

  • 2+ years of experience in data engineering or related roles
  • Strong experience with SSIS for ETL development and data integration
  • Advanced proficiency in SQL with a focus on performance optimization
  • Hands-on experience with dbt and Airbyte for data transformation and ingestion
  • Experience working with AWS data services, including Redshift, S3, and Athena
  • Solid understanding of data modeling, data warehousing concepts, and scalable architecture design
  • Familiarity with version control systems (Git) and CI/CD practices
  • Strong problem-solving skills and ability to work with large, complex datasets

Skills

ETL/ELT · SQL · SSIS · dbt · Airbyte · AWS · Redshift · S3 · Athena · Data modeling · Data warehousing · Git · CI/CD
