Principal Security Engineer, AI Security
Lila Sciences
Your Impact at Lila
As a Principal Security Engineer focused on AI Security, you will define and drive the technical strategy for securing how AI is used across Lila's enterprise. You will operate as a senior individual contributor, partnering with IT and business teams to ensure safe and compliant adoption of AI tools and platforms.
While Lila builds AI-powered systems, this role is primarily focused on securing the use of third-party and internally deployed AI tools across the enterprise — ensuring sensitive data, intellectual property, and scientific workflows are protected as AI becomes deeply embedded in how work gets done.
What You'll Be Building
- Enterprise AI Security Strategy — Define and implement security controls and guardrails for the use of AI tools (e.g., LLM APIs, SaaS AI platforms, and internal AI services) across the organization.
- AI Gateway & Agentic Gateway Security — Design and implement AI gateway controls to manage and monitor access to external and internal AI systems. Secure agentic workflows by enforcing identity, authorization, tool-use constraints, and policy controls for autonomous or semi-autonomous agents.
- AI Red Teaming & Adversarial Testing — Conduct red teaming and adversarial testing focused on enterprise AI usage, including prompt injection, data exfiltration, jailbreaks, and abuse of connected tools and plugins.
- Data Protection for AI Usage — Develop and enforce controls to prevent sensitive data leakage through AI systems, including input/output filtering, data classification, tokenization, and secure handling of prompts, embeddings, and outputs.
- Multi-Layer AI Security (Network, Endpoint, Data) — Integrate AI security into existing enterprise security layers: network visibility and control over AI service access, API traffic inspection, and zero trust enforcement; endpoint security for developer machines, research environments, browsers, and plugins; data layer controls ensuring proper handling of sensitive data when interacting with AI systems.
- AI Threat Modeling (Enterprise Context) — Develop threat models focused on enterprise AI usage, including risks such as data leakage, prompt injection, model misuse, supply chain risks from AI vendors, and unauthorized agent actions.
- Vendor & Platform Security — Assess and guide secure adoption of third-party AI vendors and platforms, including evaluating data handling practices, model behavior, and integration risks.
- Incident Response for AI Usage — Define and support response approaches for AI-related incidents, such as sensitive data exposure, policy violations, or misuse of AI tools.
- Cross-Functional Technical Leadership — Partner with Legal, Compliance, IT, and Engineering to align AI usage with regulatory requirements, data governance policies, and responsible AI practices.
- Security Enablement — Contribute to internal guidance and education on safe AI usage, including secure prompting, data handling, and appropriate use of AI tools.
- Security Tooling & Implementation — Evaluate and implement tooling for AI security, including AI gateways, DLP integrations, monitoring solutions, and policy enforcement mechanisms.
What You’ll Need To Succeed
- 8+ years of experience in information security, with strong expertise in enterprise, cloud, or application security.
- Hands-on experience designing and implementing security controls in enterprise environments.
- Familiarity with AI/ML systems and how modern AI tools (LLMs, copilots, APIs) are used in practice.
- Experience with cloud platforms (AWS/GCP), SaaS security, and zero trust architectures.
- Experience with data protection technologies (e.g., DLP, data classification, access controls).
- Practical experience with threat modeling, red teaming, or adversarial testing.
- Strong communication and influence skills across technical and non-technical stakeholders.
Bonus Points For
- Experience securing enterprise use of LLMs, copilots, or generative AI platforms.
- Familiarity with AI gateways, prompt filtering, or model interaction controls.
- Experience evaluating or securing third-party AI vendors and APIs.
- Background in regulated environments (biotech, healthcare, defense, or government).
- Experience with browser security, endpoint controls, or SaaS security platforms.
- Knowledge of privacy-enhancing technologies or confidential computing.
- Contributions to AI/ML security research or community.
Compensation
We offer competitive compensation including bonus potential and generous early equity. The final offer will reflect your unique background, expertise, and impact.
Expected Base Salary Range: $171,000–$230,534 USD
About Lila
Lila Sciences is building Scientific Superintelligence™ to solve humankind's greatest challenges. We believe science is the most inspiring frontier for AI. Rather than hard-coding expert knowledge into tools, Lila builds systems that can learn for themselves.
Lila combines advanced AI models with proprietary AI Science Factory™ instruments into an operating system for science that executes the entire scientific method autonomously, accelerating discovery at unprecedented speed, scale, and impact across medicine, materials, and energy. Learn more at www.lila.ai.
Guided by our core values of truth, trust, curiosity, grit, and velocity, we move with startup speed while tackling problems of historic importance. If this sounds like an environment you'd love to work in, even if you don't meet every qualification listed above, we encourage you to apply.
We’re All In
Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.
A Note to Agencies
Lila Sciences does not accept unsolicited resumes from any source other than candidates. The submission of unsolicited resumes by recruitment or staffing agencies to Lila Sciences or its employees is strictly prohibited unless contacted directly by Lila Sciences' internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Lila Sciences, and Lila Sciences will not owe any referral or other fees with respect thereto.