Safeguards Analyst, Human Exploitation & Abuse

At a Glance

Location
Remote-friendly (travel required) | San Francisco, California, United States
Work Regime
Remote
Experience
3+ years
Posted
March 19, 2026

Key Requirements

Required Skills

Data Analysis, Python, SQL

Domain Knowledge

  • Banking
  • Education
  • Logistics

Requirements

  • Subject matter expertise in one or more of: human trafficking, human exploitation and abuse, sextortion, image-based sexual abuse / non-consensual intimate imagery, or commercial sexual exploitation
  • Experience building or operating detection and review workflows for sensitive content at a platform, NGO, hotline, or similar organization
  • Ability to use SQL, Python, and/or other data analysis tools to interact with large datasets and derive insights that support key decisions and recommendations
  • Familiarity with the NGO and industry ecosystem working on these harms (for example, Polaris Project, Thorn, NCMEC, IWF, StopNCII, or industry hash-sharing initiatives)
  • Experience conducting open-source investigations or threat actor profiling in a trust & safety, intelligence, or law enforcement context
  • Experience working with generative AI products, including writing effective prompts for content review and enforcement

Compensation & Benefits

Annual Salary: $245,000 – $285,000 USD

Responsibilities

As a Safeguards Analyst focused on human exploitation and abuse, you will build and execute enforcement workflows that detect and mitigate the use of our products to facilitate human trafficking, sextortion, image-based sexual abuse, bullying, and harassment.

As a member of the user well-being team, your initial focus will be on standing up detection, review, and escalation workflows for this domain — from tuning classifiers and curating evaluation datasets through to managing external partnerships and real-world harm escalation pathways.

Safety is core to our mission, and you'll help shape policy enforcement so that our users can interact with and build on top of our products across all surfaces in a harmless, helpful, and honest way.

  • Design and architect automated enforcement systems and review workflows for human exploitation and abuse, ensuring they scale effectively while maintaining high accuracy
  • Partner with Product, Engineering, and Data Science teams to build and tune detection signals for human trafficking, sextortion, and image-based sexual abuse, and to develop custom mitigations for these sensitive policy areas
  • Curate policy violation examples, maintain golden evaluation datasets, and track enforcement actions across both consumer and API surfaces

About the Company

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.