
Data Engineer III

Brooksource

Birmingham · On-site · Contract · Senior · 3d ago

ABOUT THE ROLE

Our client is seeking an experienced Data Engineer to design, develop, and support robust data engineering and analytics solutions. In this role, you will draw on 5-10 years of experience manipulating data in a software engineering capacity to work with relational and NoSQL systems as well as data lakes. You will be responsible for normalizing databases, constructing analyzable datasets, and integrating raw data from multiple sources into consistent, machine-readable formats that support company requirements.

The ideal candidate has deep knowledge of SQL, data modeling, and big data tools such as Spark, Hive, Airflow, and Databricks, with hands-on experience in both batch and real-time data processing. You will also work with data quality tools and data storage techniques, and develop solutions using on-premises tools (MSBI, Informatica, Oracle, SQL Server) and Microsoft cloud-based tools (Azure Data Lake, Data Factory, Databricks, Synapse, Power BI). Additional skills in containerization (Docker, OpenShift), Agile and DevOps methodologies, and building data solutions with APIs and web services are highly valued. Experience developing statistical models and AI/ML solutions in R or Python is also desirable.

WHAT YOU'LL DO

  • Design, develop, test, deploy, and support data engineering and analytics solutions using both on-premises and cloud-based tools.
  • Manipulate and normalize data across relational, NoSQL, and data lake environments to meet application and business requirements.
  • Construct, combine, and transform raw data from multiple sources into consistent, machine-readable, and analysis-ready datasets.
  • Develop and maintain Databricks pipelines for diverse data sources, ensuring efficient data integration and processing.
  • Work with batch and real-time data processing frameworks to support various business needs.
  • Implement and optimize data models, schemas, and storage techniques for a variety of data sources and structures.
  • Utilize big data technologies such as Hadoop, Hive, and Spark to build scalable data engineering solutions.
  • Develop and support statistical models and AI/ML solutions using R and/or Python.
  • Create functional and technical designs for data engineering and analytics projects, ensuring alignment with business objectives.
  • Ensure data quality and integrity using appropriate tools and methodologies.
  • Apply containerization (Docker, OpenShift), Agile, DevOps, and CI/CD practices in solution development and deployment.
  • Design and develop data sourcing, enrichment, and delivery solutions using APIs and web services.

WHAT YOU BRING

  • 5-10 years of experience manipulating data in a software engineering capacity.
  • Expertise with relational and NoSQL systems, as well as data lakes.
  • Proficiency in database normalization and data structure design to meet application requirements.
  • Ability to construct and combine datasets from multiple sources into machine-readable formats.
  • Deep knowledge of SQL, data modeling, Spark, Hive, and Airflow.
  • Experience creating and maintaining Databricks pipelines for diverse data sources.
  • Hands-on experience with batch and real-time data processing frameworks.
  • Experience with data modeling, data access, schemas, and storage techniques.
  • Familiarity with data quality tools and methodologies.
  • Experience creating functional and technical designs for data engineering and analytics solutions.
  • Experience implementing data models with various schemas and data source types.
  • Hands-on experience with big data technologies (Hadoop, Hive, Spark).
  • Experience developing and supporting statistical models and AI/ML solutions in R and/or Python.
  • 5+ years designing, developing, testing, deploying, and supporting data engineering solutions using on-premises tools (MSBI, Informatica, Oracle GoldenGate, SQL, Oracle, SQL Server).
  • 3+ years designing, developing, testing, deploying, and supporting data engineering solutions using Microsoft cloud tools (Azure Data Lake, Data Factory, Databricks, Python, Synapse, Key Vault, Power BI).
  • Experience with containerization (Docker, OpenShift, etc.).
  • Experience with Agile, DevOps, and CI/CD methodologies.
  • Hands-on experience with data sourcing, enrichment, and delivery using APIs and Web Services.

Skills

Agile, Airflow, APIs, Azure Data Factory, Azure Data Lake, Azure Key Vault, Azure Synapse, CI/CD, Data Modeling, Databricks, DevOps, Docker, Hadoop, Hive, Informatica, MSBI, NoSQL, OpenShift, Oracle, Oracle GoldenGate, Power BI, Python, R, Spark, SQL, SQL Server, Web Services
