
(Junior) Cloud Data Engineer (all genders)

DYMATRIX GmbH

Karlsruhe · On-site · Entry Level · 1mo ago

About

At DYMATRIX, we passionately develop innovative software solutions for data-driven, AI-powered customer experiences. Our SaaS platform enables companies to address their contacts automatically and personally across all channels. But we don't just deliver the technology: our experts support customers from strategy and implementation through to ongoing operations. More than 200 well-known companies from the DACH region trust us, from medium-sized businesses to DAX corporations, in industries such as retail, media, banking, and insurance. To achieve this, we need a strong team – we need you!

Requirements

  • First programming skills (ideally in C#, Python, and PySpark).
  • SQL knowledge for data analysis and quality assurance.
  • Understanding of cloud architectures and data processing.
  • Interest in conceptual topics related to data modeling and ETL design.
  • Willingness to further develop in technologies such as Databricks and Data Factory.
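As a rough illustration of the kind of data-transformation work these requirements describe, here is a toy sketch in plain Python (the role itself targets PySpark on Databricks; the record fields and helper function here are invented for illustration):

```python
# Toy ETL-style transformation in plain Python (illustrative only;
# in the role this would typically be a PySpark DataFrame pipeline).

raw_contacts = [
    {"id": 1, "email": "Anna@Example.COM ", "channel": "email"},
    {"id": 2, "email": None, "channel": "sms"},
    {"id": 3, "email": "ben@example.com", "channel": "email"},
]

def clean(record):
    """Normalize an email address; drop records without one."""
    email = record.get("email")
    if not email:
        return None
    return {**record, "email": email.strip().lower()}

cleaned = [r for r in (clean(c) for c in raw_contacts) if r is not None]
print(cleaned)  # two records survive; emails are trimmed and lower-cased
```

The same filter-and-normalize pattern maps directly onto PySpark's `filter` and `withColumn` operations once datasets grow beyond what a single machine can hold.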

Responsibilities

  • Development of Azure Functions in C# (.NET) to implement real-time data transformations, exposed via REST APIs.
  • Creation and maintenance of Azure Data Factory Pipelines for automating data transfers.
  • Development of Azure Databricks Notebooks, especially for implementing ETL processes with PySpark – initial programming skills and a good understanding of data processing are required.
  • Working with SQL, including performing quality assurance, data analysis, and targeted performance optimizations on storage technologies such as Cosmos DB, Azure SQL DB, and Data Lake Storage.
  • Optimization and fine-tuning of processes through detailed analysis, including with Log Analytics.
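The SQL quality-assurance tasks above can be sketched with checks like the following (a minimal example against an in-memory SQLite database; the actual role targets Azure SQL DB, Cosmos DB, and Data Lake Storage, and the table and column names are invented):

```python
import sqlite3

# Minimal data-quality checks of the kind described above, run against
# an in-memory SQLite database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contacts (id INTEGER, email TEXT);
    INSERT INTO contacts VALUES
        (1, 'anna@example.com'),
        (2, NULL),
        (3, 'anna@example.com');
""")

# Check 1: how many rows are missing an email address?
null_emails = conn.execute(
    "SELECT COUNT(*) FROM contacts WHERE email IS NULL"
).fetchone()[0]

# Check 2: how many email addresses occur more than once?
dup_emails = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT email FROM contacts
        WHERE email IS NOT NULL
        GROUP BY email HAVING COUNT(*) > 1
    )
""").fetchone()[0]

print(null_emails, dup_emails)  # 1 missing email, 1 duplicated address
```

In practice, queries like these run as scheduled checks in the pipeline, and the same aggregation patterns carry over unchanged to Azure SQL DB.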

About the Role

General: In an interdisciplinary project team, you will take on the technical implementation of data processing pipelines in batch and real-time. As an expert in data engineering, you will be the point of contact for colleagues and customers in the project. Using modern cloud-based technologies, you will generate innovative solutions around our products – supported by AI.

We are looking for you – with or without professional experience – as a (Junior) Data Engineer (all genders) for our locations in Karlsruhe, Stuttgart, Hamburg, Munich, or Cologne.

Tech Stack:

  • Azure SQL Database, Azure Cosmos DB, and Data Lake Storage
  • Other components from the Azure Stack such as Log Analytics, Key Vault, API Management
  • Azure Portal
  • Company's own OpenAI-based AI assistant
  • Project management in JIRA and Confluence

Skills

.NET, API Management, Azure Cosmos DB, Azure Data Factory, Azure Functions, Azure Portal, Azure SQL Database, Azure Stack, C#, Confluence, Data Lake Storage, Databricks, ETL, JIRA, Key Vault, Log Analytics, OpenAI, Python, PySpark, REST API, SQL
