Cohere
Product Manager, Safety Research
At a Glance
- Location
- United States
- Employment
- Full-time
- Experience
- 5+ years
- Department
- Cohere
- Posted
- 2026-03-11
Key Requirements
Domain Knowledge
- Engineering
- Regulatory
Benefits & Perks
- …ing stipend
- ✈️ 6 weeks of vacation (30 working days)
Requirements
- 5+ years of product management or research operations experience, with meaningful time working alongside research or ML teams at a technology or AI company.
- Technical depth sufficient to engage credibly with safety researchers: you don't need to run evals yourself, but you need to understand what they mean and ask the right questions.
- Genuine interest in AI safety and model behavior, including the real-world implications of deploying LLMs in enterprise contexts.
- Comfort operating in ambiguity — safety research surfaces unexpected findings, and this role requires good judgment about what to act on and how fast.
- Ability to work across researchers, engineers, and product teams and keep everyone aligned without flattening the nuance of what the research is actually saying.
- Strong written communication: you can translate complex model behavior findings for non-technical audiences and know when something needs to be escalated urgently.
Responsibilities
We are seeking a Safety Research PM to bridge Cohere's AI safety research and the North product. This role sits at the intersection of model research and product delivery — you'll work directly with Cohere's modeling and safety research teams to understand how our models behave, where they fall short, and how those insights translate into concrete safety features and guardrails within North.
This isn't a traditional PM role. You'll spend as much time reading evaluations and engaging with researchers as you will writing PRDs. The right person is intellectually curious, comfortable with ambiguity, and has the technical depth to engage seriously with model behavior research while also having the product instincts to know what to do with it.
- Serve as the product bridge between Cohere's safety research teams and North, ensuring that findings from model evaluations, red-teaming, and behavioral research translate into product-level guardrails, controls, and safeguards.
- Own the safety product roadmap for Cohere and North, prioritizing features based on research findings, observed misuse patterns, evolving threat vectors, and customer requirements.
- Partner with modeling teams to scope and interpret safety evaluations — understanding how Cohere's underlying models behave across adversarial inputs, edge cases, and high-stakes use cases.
About the Company
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!