AI Security Engineer (f/m/d)
PAIR Finance
About the role
We're looking for a skilled AI Security Engineer (f/m/d) to strengthen the security of our intelligent systems. You'll be instrumental in protecting our AI/ML pipelines, embedding security best practices across the full machine learning lifecycle, and ensuring compliance with evolving industry standards. Working closely with data scientists, ML engineers, and platform teams, you'll define and implement concrete security controls across our pipelines, infrastructure, and applications.
Bringing your solid foundation in AI/ML security, you'll own risk assessments, shape governance documentation, and translate the EU AI Act and financial-services regulations into practical technical controls. You'll also support monitoring, incident response, and AI vendor evaluations in collaboration with Legal, Compliance, and Procurement. If you're passionate about solving complex security challenges at the intersection of AI and fintech, we invite you to join us and make a real impact.
This role is based in Berlin, and we have a hybrid working policy. Our modern office near Uhlandstraße is stocked with fresh fruit, muesli and drinks for a comfortable and enjoyable workplace.
For more information about PAIR Finance and career opportunities, please visit our website and our careers page.
Strengthened by one of the most renowned private equity firms in the fintech sector, Pollen Street, as well as partnerships with other investors such as Zalando Payments, PAIR Finance offers an excellent opportunity to dive deep into and actively shape the fintech industry.
We welcome applications from all qualified individuals regardless of ethnicity, color, religion, gender, sexual orientation, gender identity or expression, age, national origin, marital status, or disability.
Responsibilities
- Conduct AI-specific threat modeling and security reviews across the ML lifecycle (data → training → deployment → monitoring).
- Perform security testing / red-teaming of LLM and ML systems (e.g. prompt injection tests, jailbreaks, exfiltration and data-leakage tests).
- Work closely with data scientists, Machine Learning engineers, platform engineers and Compliance & IT Security to define and implement concrete controls in pipelines, infrastructure and applications.
- Help define monitoring, logging and incident response for AI/LLM systems, including misuse and data-leak detection.
- Collaborate with Legal, Compliance and Procurement on AI vendor selection, risk assessments and contract reviews.
Requirements
- Demonstrable experience in Artificial Intelligence/Machine Learning security in a production context – not just general cybersecurity.
- Practical knowledge of LLM-specific risks, such as:
- prompt injection and jailbreaks
- data leakage and sensitive information exposure
- Solid understanding of the ML lifecycle and typical MLOps setups (data pipelines, training, evaluation, deployment, CI/CD, monitoring) and where to place security controls.
- Experience designing or reviewing secure architectures for AI/LLM systems, including:
- API security and authentication/authorization
- isolation of tenants/contexts and access control for data sources & vector stores
- protection of sensitive data in prompts, logs and training data.
- Experience working side-by-side with data scientists or ML engineers – you have credibility in technical rooms and can challenge design decisions constructively.
- Ability to read Python code and basic ML pipelines and to build small scripts/tools (e.g. for automated tests, log analysis, or prototype guardrails).
- Familiarity with the OWASP Top 10 for LLM Applications.
- Understanding of EU AI Act obligations and how they apply to a fintech / financial services context, with the ability to map them to concrete controls.
- Strong grasp of data protection and privacy-by-design in AI (data minimisation, pseudonymisation/anonymisation, retention and deletion of training and log data).
- Experience with logging, monitoring and incident response for AI or other high-risk systems.
- Background in financial services or fintech, or another highly regulated industry.
What We Offer
- Strong, experienced international team of 30+ nationalities with professionals and experts to support and mentor you along the way
- Flat hierarchy, transparent and appreciative feedback culture, monthly all hands meetings, annual feedback and evaluation cycle, regular 1-on-1s with your lead
- Well-structured onboarding process as well as supportive and welcoming colleagues
- Personal learning & development budget as well as German and English language courses
- A good salary that rewards your strong performance
- Unlimited employment contract, flexible working hours and 28 vacation days for your work-life balance
- Company pension plan, partly covered Deutschlandticket (public transport) and access to “Corporate Benefits” voucher platform to ensure your full well-being
- Fun company summer and Christmas parties as well as regular team events