Research Engineer, Frontier Safety Mitigations, DeepMind
Google DeepMind
Location & Fair Chance Ordinance
Applicants in San Francisco: Qualified applications with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act.
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Mountain View, CA, USA; San Francisco, CA, USA.
Minimum Qualifications
- Bachelor’s degree or equivalent practical experience.
- 5 years of experience with software development in one or more programming languages.
- 3 years of experience testing, maintaining, or launching software products, and 1 year of experience with software design and architecture.
Preferred Qualifications
- Master's degree or PhD in Computer Science or a related technical field.
- Experience in Large Language Model (LLM) development, fine-tuning, or safety evaluation methodologies.
- Experience taking research from concept to product.
- Experience in areas such as Safety and Alignment.
- Familiarity with concepts like the Frontier Safety Framework, adversarial attacks, and in-model/out-of-model mitigation strategies.
- Ability to build large-scale research or engineering systems.
About the Job
The goal of our frontier safety mitigations work is to de-risk model launches by researching and implementing defenses against high-stakes frontier safety risks, particularly those arising from misuse that could make a model tangibly dangerous as capabilities increase.
In this role, you will be accountable for the safety and behavior of Google DeepMind’s (GDM) latest Gemini models. You will focus on critical domains such as CBRN (Chemical, Biological, Radiological, Nuclear), Cybersecurity, and Harmful Manipulation, and will ensure that our mitigations still enable the beneficial use of our technology. You will employ a wide range of methods, from building novel evaluations and red‑teaming to researching and deploying advanced mitigations, monitoring emerging risks, and contributing to model development.
Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high‑quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.
We are pushing the boundaries across multiple domains. Our global teams offer diverse learning opportunities and varied career pathways for those driven to achieve exceptional results through collective effort.
Compensation
The US base salary range for this full‑time position is $174,000–$252,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job‑related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
- Develop high‑quality evaluations that capture risks arising in frontier safety domains, such as cybersecurity and CBRN.
- Design, implement, and productionalize a range of safety mitigations, including in‑model approaches (e.g., Supervised Fine‑Tuning (SFT) and Reinforcement Learning (RL) training recipes) and out‑of‑model solutions (e.g., logging, monitoring).
- Own the data and evaluation pipeline to measure the effectiveness of our mitigations.
Equal Opportunity
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.