Anthropic

Technical Policy Manager, Cyber Harms

At a Glance

Location
United States
Work Regime
Remote
Experience
5+ years
Posted
February 19, 2026

Key Requirements

Required Skills

  • Data Analysis
  • Python

Certifications

  • CISA
  • OSCP

Domain Knowledge

  • Banking
  • Cybersecurity
  • Defense
  • Education
  • Logistics

Requirements

5+ years of hands-on experience in modern cybersecurity, with deep expertise in both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (network security, detection, monitoring, incident response)

Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)

Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks

Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)

Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases

Responsibilities

We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain.

As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviors and prevent misuse by sophisticated threat actors.

Working closely with Research Engineers who build these safety systems, you and your team will provide the critical cybersecurity domain knowledge needed to ensure our safeguards are effective against real-world threats.

You will be at the forefront of defining what responsible AI safety looks like in the cybersecurity domain, working across research, policy, and engineering to translate complex cyber threat concepts into concrete technical safeguards and actionable policies.

This is a unique opportunity to shape how frontier AI models handle dual-use cybersecurity knowledge, balancing the tremendous potential of AI to advance legitimate security research and defensive capabilities against the risk of misuse by malicious actors.