Senior Data Engineer - GM Motorsports
Berrodin Parts Warehouse
About the Role
We are seeking a Senior Data Engineer to join our Motorsports Data Engineering team, building next-generation data platforms that power high-performance racing programs across Formula 1, NASCAR, IndyCar, IMSA, and beyond.
This role is responsible for designing, building, and operating scalable, real-time and batch data pipelines that ingest high-frequency telemetry, simulation, wind tunnel, and trackside data into our enterprise data platform. You will work at the intersection of cloud architecture, streaming systems, and performance analytics — enabling engineers, strategists, and race teams to make faster, data-driven decisions.
As a senior member of the team, you will own critical components of our Kafka/Flink streaming architecture, Databricks lakehouse implementations, and infrastructure-as-code deployments. You will collaborate cross-functionally with race engineering, software development, operations, and external technical partners to ensure resilient, secure, and high-performance data delivery across environments.
This is a hands-on engineering role requiring deep technical expertise, architectural thinking, and a strong sense of ownership.
What You’ll Do
- Develop data pipelines using Python, Java, and SQL, among other tools and technologies.
- Update existing software and/or develop new software solutions to address a specific need or solve a particular business problem.
- Contribute to development using appropriate methodologies and a repeatable, systematic, and quantifiable approach.
- Identify and remediate software issues related to code quality, security, patterns, frameworks, usability, or other end-user concerns.
- Develop your skills by working closely with peers to keep code aligned with established design patterns and frameworks.
- Integrate with other applications and systems.
- Automate unit and end-to-end testing of software systems within your domain, with a focus on software quality and maintainability.
- Provide guidance and