
Offensive Security Analyst

Alignerr

Remote (France) · Contract · $40–$60/hr

About the role

At Alignerr, we partner with the world’s leading AI research teams and labs to build and train cutting‑edge AI models.

This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through systems, where defenses fail, and how risk propagates across modern environments.

Organization: Alignerr
Position: Offensive Security Analyst (Structured / Non‑Exploit)
Type: Contract / Task‑Based
Compensation: $40–$60/hour
Location: Remote
Commitment: 10–40 hours/week

What You’ll Do

  • Analyze attack paths, kill chains, and adversary strategies across real‑world systems
  • Classify weaknesses, misconfigurations, and defensive gaps
  • Review red‑team style scenarios and intrusion narratives
  • Help generate, label, and validate adversarial reasoning data used to train and evaluate AI systems

What We’re Looking For

  • 2+ years in pentesting, red teaming, or a strong blue‑team role with hands‑on attack knowledge
  • Understanding of how real attacks unfold in production environments
  • Ability to clearly explain attack chains, impact, and tradeoffs

Why Join Us

  • Competitive pay and flexible remote work
  • Work directly on frontier AI systems
  • Freelance perks: autonomy, flexibility, and global collaboration
  • Potential for contract extension

Application Process (takes 10–15 minutes)

  • Submit your resume
  • Complete a short screening
  • Project matching and onboarding

Note: Our team reviews applications daily. Please complete your AI interview and all application steps to be considered for this opportunity.



Benefits

  • Health insurance
  • Dental insurance
  • Vision insurance
