
Java backend with Spark Developer

Jobs via Dice

New York · Hybrid · Contract · Senior · 2d ago

About the role

Role

  • Java backend with Spark Developer

Duration

  • Long Term Contract

Location

  • New York (Hybrid)

About

The FRTB Risk Engines within the Market Risk Department interface with various systems to obtain valuation Greeks, risk sensitivities, and trade attributes for trades across various asset classes. The FRTB applications play many roles, namely reference data management, data ingestion, capital charge calculation, and tooling for analysis and reporting. The distributed processing platform is event-based and leverages big data technologies such as Spark and Greenplum.

The ideal candidate will have extensive hands‑on experience designing, building and integrating analytical systems in a multitier data‑centric environment. Experience with large‑scale relational databases, strong SQL, Java and Linux are essential. Working knowledge of big data technologies such as Spark is a plus. The candidate will work in an agile squad to design and implement solutions following a Service Oriented Architecture (SOA).

We are looking for candidates with experience in Core Java, Apache Spark, databases, SQL, and server-side application development. Knowledge of distributed computing, handling high volumes of data, process optimization, and reducing run time will add value.

Responsibilities

  • Work on developing new and enhancing existing Market Risk applications
  • Be part of an Agile squad with members in Montreal, Budapest, India, London and New York following Agile principles and applying DevOps practices
  • Be able to work with our business partners
  • Shape the tooling and technology landscape of Risk Management by introducing tools enabling better business processes required for meeting the firm’s regulatory obligations

Requirements

  • 5 years of hands-on experience with Core Java, server-side development, Spring, and RDBMS
  • Experience building distributed data processing pipelines using Apache Spark, Hive, Python, and other tools and languages
  • Strong object‑oriented design and development skills, data structures and algorithms, and design patterns
  • A good understanding of how to build multithreaded applications and hands‑on experience with concurrency packages
  • Excellent critical thinking and analytical ability
  • A culture of incorporating unit tests, written with JUnit, when designing systems
  • Strong experience with relational databases, logical modelling
  • Strength in querying large relational databases in an optimized manner
  • Ability to write scripts in Shell, Perl, Python
  • Agile development experience
  • Strong collaboration and communication skills; the candidate will work in a global team where clear and concise communication skills are a must
  • Ability to work independently while following proper coding standards
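As a minimal illustration of the multithreading and concurrency-package skills listed above, the following Java sketch fans independent per-trade computations out to a thread pool and joins the results. The class name, pool size, and sensitivity values are hypothetical, invented purely for this example:

```java
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical example: summing per-trade risk sensitivities in parallel
// using the java.util.concurrent package. Values are made up.
public class ParallelSensitivitySum {

    static double sumSensitivities(List<Double> sensitivities) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Submit one independent task per sensitivity value.
            List<Future<Double>> futures = new ArrayList<>();
            for (Double s : sensitivities) {
                futures.add(pool.submit(() -> s)); // placeholder per-trade work
            }
            // Join the results back on the calling thread.
            double total = 0.0;
            for (Future<Double> f : futures) {
                total += f.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        double total = sumSensitivities(List.of(1.5, 2.5, 3.0));
        System.out.println("total = " + total);
    }
}
```

In a real risk engine each task would do substantive work (e.g., revaluing a trade), but the submit/join structure with `ExecutorService` and `Future` is the same.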

Nice to Have

  • Risk/Financial Systems development experience
  • Automated testing

Skills

Apache Spark, Core Java, DB, Greenplum, Hive, Java, JUnit, Linux, Perl, Python, RDBMS, Relational databases, Shell, SQL, Spark, Spring
