Data Engineer III
JPMC Candidate Experience page
Jersey City · On-site · Full-time · Senior · 4d ago
About the role
Be part of a dynamic team where your distinctive skills will contribute to a winning culture.
As a Data Engineer III at JPMorgan Chase within the Consumer and Community Bank - Connected Commerce Technology team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Supports review of controls to ensure sufficient protection of enterprise data
- Advises on and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on data engineering concepts and 3+ years applied experience
- Experience across the data lifecycle
- Experience with ETL processes and advanced ETL concepts
- Advanced at SQL (e.g., joins and aggregations)
- Experience with AWS and with the design, implementation, and maintenance of data pipelines using Python and PySpark (Java as a secondary alternative)
- Working understanding of NoSQL databases
- Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
- Experience customizing a tool's configuration to generate a product
- Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batch, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools
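As a minimal sketch of the joins-and-aggregations SQL skill called out above, the following uses Python's built-in sqlite3 module with a hypothetical customers/orders schema (the table and column names are illustrative, not from the posting):

```python
import sqlite3

# Hypothetical schema for illustration: orders joined to customers,
# then aggregated to order count and total spend per customer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 45.5);
""")

# Join plus aggregation: count and sum of order amounts per customer.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Ada', 2, 200.0), ('Grace', 1, 45.5)]
```

The same join/aggregate pattern carries over to the distributed engines the role mentions (PySpark, Snowflake), where the SQL is largely identical even though execution differs.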
Preferred qualifications, capabilities, and skills
- Advanced Python development skills, including Kafka and S3 integration and performance optimization
- Experience in carrying out data analysis to support business insights
- Strong skills in PySpark, AWS, and Snowflake
Skills
AWS · Avro · CI/CD · Data Vault · Docker · Iceberg · Inmon · Java · JSON · Kimball · NoSQL · Parquet · Protobuf · Python · PySpark · SQL · Snowflake · Unix scripting