Anthropic

Biological Safety Research Scientist


At a Glance

Location
United States
Posted
February 19, 2026

Key Requirements

Required Skills

  • Data Analysis
  • Python


Requirements

A PhD in molecular biology, virology, microbiology, biochemistry, systems or computational biology, or a related life sciences field, OR equivalent professional experience

Extensive experience in scientific computing and data analysis, with proficiency in programming (Python preferred)

Deep expertise in modern biology, spanning both "reading" techniques (e.g., high-throughput measurement, functional assays) and "writing" techniques (e.g., gene synthesis, genome editing, strain construction, protein engineering)

Familiarity with dual-use research concerns, select agent regulations, and biosecurity frameworks (e.g., Biological Weapons Convention, Australia Group guidelines)

Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders

A passion for learning new skills and the ability to rapidly adapt to changing techniques and technologies

Responsibilities

We are looking for biological scientists to help build safety and oversight mechanisms for our AI systems. As a Safeguards Biological Safety Research Scientist, you will apply your technical skills to design and develop safety systems that detect harmful behaviors and prevent misuse by sophisticated threat actors. You will be at the forefront of defining what responsible AI safety looks like in the biological domain, working across research, policy, and engineering to translate complex biosecurity concepts into concrete technical safeguards. This is a unique opportunity to shape how frontier AI models handle dual-use biological knowledge, balancing AI's tremendous potential to accelerate legitimate life sciences research against the risk of misuse.

In this role, you will:

Design and execute capability evaluations ("evals") that assess new models in the biological domain
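A capability eval of the kind described above can be sketched in a few lines of Python. This is a toy illustration only, not Anthropic's actual tooling: the item structure, the keyword-based grader, and the example prompts are all hypothetical (real evals typically use human or model-based rubric grading).

```python
# Toy sketch of a capability eval harness. All names and the grading
# scheme are illustrative assumptions, not a real evaluation framework.
from dataclasses import dataclass


@dataclass
class EvalItem:
    prompt: str   # question posed to the model
    rubric: str   # keyword a correct answer must contain (toy grading)


def grade(response: str, item: EvalItem) -> bool:
    # Toy grader: substring match. Real evals use far richer rubrics.
    return item.rubric.lower() in response.lower()


def run_eval(model_fn, items: list[EvalItem]) -> float:
    """Return the fraction of items the model answers correctly."""
    correct = sum(grade(model_fn(it.prompt), it) for it in items)
    return correct / len(items)


# Usage with a stub "model" that always gives the same answer:
items = [
    EvalItem("What enzyme unwinds DNA?", "helicase"),
    EvalItem("What temperature is the PCR denaturation step?", "95"),
]
score = run_eval(lambda p: "DNA helicase unwinds the double helix.", items)
# The stub answers the first item correctly and misses the second,
# so score == 0.5 here.
```

In practice the interesting design work is in the item set and the grader, not the loop; the harness itself stays this simple.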

Collaborate closely with internal and external threat modeling experts to develop training data for our safety systems, and with ML engineers to train these safety systems, optimizing for both robustness against adversarial attacks and low false-positive rates for legitimate researchers

Analyze safety system performance on production traffic, identifying gaps and proposing improvements
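Analyzing safety system performance on traffic ultimately comes down to metrics like the ones named earlier: recall against harmful requests versus the false-positive rate imposed on legitimate researchers. A minimal sketch, assuming a batch of labeled traffic samples (the function and variable names are illustrative, not a real internal API):

```python
# Hedged sketch: recall and false-positive rate for a safety classifier,
# computed from labeled traffic samples. Names are illustrative only.

def classifier_metrics(labels, flags):
    """labels: True if the request is actually harmful (ground truth).
    flags:  True if the safety system flagged the request."""
    tp = sum(l and f for l, f in zip(labels, flags))          # harmful, flagged
    fp = sum((not l) and f for l, f in zip(labels, flags))    # benign, flagged
    fn = sum(l and (not f) for l, f in zip(labels, flags))    # harmful, missed
    tn = sum((not l) and (not f) for l, f in zip(labels, flags))
    recall = tp / (tp + fn) if (tp + fn) else 0.0  # harmful traffic caught
    fpr = fp / (fp + tn) if (fp + tn) else 0.0     # legitimate users blocked
    return recall, fpr


# Five labeled samples: two harmful, three benign.
labels = [True, True, False, False, False]
flags = [True, False, False, True, False]
recall, fpr = classifier_metrics(labels, flags)
# recall == 0.5 (one of two harmful requests caught),
# fpr == 1/3 (one of three benign requests flagged).
```

The tension the role describes is visible even in this toy: tightening the classifier to raise recall tends to raise the false-positive rate on legitimate research traffic, so both numbers have to be tracked together.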

Develop rigorous stress tests of our safeguards against evolving threats and new product surfaces