Senior PySpark Developer
Coforge
About the role
We are looking for a strong Data Engineer with hands-on experience building data pipelines, performing transformations, and working with cloud-based data tools. The ideal candidate must have solid SQL skills, a good understanding of data modeling, and hands-on exposure to DBT and cloud platforms (preferably GCP).
Key Responsibilities
• Build and maintain scalable ETL/ELT pipelines.
• Develop and enhance DBT models and transformations.
• Perform data cleansing, validation, and quality checks.
• Support cloud-based data engineering workloads (preferably GCP).
• Write optimized SQL queries for analytics and data processing.
• Collaborate with Data Engineers, Analysts, and Architects to understand requirements.
• Monitor pipeline health, troubleshoot issues, and ensure data reliability.
• Document workflows, models, and mappings.
Required Skills (Must Have)
• Understanding of data modeling (Star/Snowflake schemas).
• Experience with PySpark or a similar distributed data processing framework.
• Strong SQL skills.
• Hands-on experience with DBT.
• Experience with at least one major cloud platform: GCP (BigQuery, Dataflow, Dataproc, Composer, GCS preferred) or equivalent AWS/Azure services.
• Experience with data transformation and loading technologies.
• Good knowledge of Python for data tasks.
Share your resume at Aarushi.Shukla@Coforge.Com