Werkstudent (m/w/d) Data Engineering - Schwerpunkt Energiemanagement
Leipziger Verkehrsbetriebe GmbH
Leipzig · On-site · Entry Level · 2 weeks ago
About the role
Bring your data expertise and use smart solutions to actively improve the efficiency of our energy management in public transport. You will work closely with our Data Architects and Developers and independently take on tasks within clearly defined areas.
Responsibilities
- Development of scalable data models to support analytics applications
- Implementation of Data Lake and Data Warehouse structures with clear quality and transformation rules
- Setup and operation of data pipelines (Batch/Streaming) – including monitoring and error management
- Preparation of data for exploratory analyses
- Provision of data for frontend and analytics applications
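For illustration only (this sketch is not part of the posting): a "quality and transformation rule" of the kind described above could be as simple as filtering invalid records and aggregating them into a curated table. The example below uses Python's built-in sqlite3 module as a stand-in for a warehouse layer; all table and column names (raw_energy, curated_energy, vehicle_id, kwh) are hypothetical.

```python
import sqlite3

# In-memory database standing in for a warehouse staging layer.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Raw landing table: readings as they arrive, possibly invalid.
cur.execute("CREATE TABLE raw_energy (vehicle_id TEXT, kwh REAL)")
cur.executemany(
    "INSERT INTO raw_energy VALUES (?, ?)",
    [("tram_01", 12.5), ("tram_02", -3.0), ("tram_03", None), ("tram_01", 8.1)],
)

# Curated table: a quality rule (non-null, non-negative values only)
# plus a transformation (aggregate consumption per vehicle).
cur.execute("""
    CREATE TABLE curated_energy AS
    SELECT vehicle_id, SUM(kwh) AS total_kwh
    FROM raw_energy
    WHERE kwh IS NOT NULL AND kwh >= 0
    GROUP BY vehicle_id
""")

rows = dict(cur.execute("SELECT vehicle_id, total_kwh FROM curated_energy"))
print(rows)  # only valid tram_01 readings survive, summed per vehicle
con.close()
```

In a real pipeline the same pattern would typically run in PySpark or a warehouse engine rather than SQLite, with the rules versioned alongside the transformation code.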
Offer
- Challenging task and opportunities for development
- Guidance and onboarding by members of the Analytics team as well as targeted feedback for your personal development
- Insights into the field of Data Engineering by actively participating and building references
- Central location and very good accessibility by public transport
Requirements
- Student (m/f/d) in a STEM field of study
- Preferably nearing completion of a Bachelor's degree or already in a Master's program
- Basic practical experience with Python, PySpark, Pandas, SQL
- Additional practical experience with Git, AWS/Azure desirable
- Very good German language skills
- Active participation in an agile team, independent handling of tasks within the defined scope, and quick and reliable acquisition of new topics
Skills
AWS · Azure · Git · Pandas · Python · PySpark · SQL