Anthropic is hiring: Research Operations & Strategy Lead - Coding & Cybersecurity
Anthropic, San Francisco, CA, US, 94199
Overview
Research Operations & Strategy Lead for Coding & Cybersecurity Data at Anthropic. Build and scale the data operations that advance Claude's coding and cybersecurity capabilities. Partner with research teams to design and execute data strategies, manage vendor relationships, and own the data pipeline from requirements to production. This is a zero-to-one role that requires enough technical depth to understand what makes training data high quality, with a focus on strategy and execution rather than hands-on engineering.
Responsibilities
Develop and execute data strategies for coding capabilities, cybersecurity evaluations, and agentic AI research
Partner with research leaders to translate technical requirements into operational frameworks
Build data collection and evaluation systems through internal tools, vendor partnerships, and new approaches
Identify, evaluate, and manage specialized contractors and vendors for technical data collection
Implement quality control processes to ensure data meets training requirements
Manage multiple complex projects simultaneously, balancing technical needs with delivery timelines
Track metrics and communicate progress to stakeholders
Qualifications
3+ years in technical operations, product management, or entrepreneurial experience building from zero to scale
Strong technical foundations: proficiency in Python and an understanding of ML workflows and evaluation frameworks
Strong communication skills and ability to engage with both technical and non-technical stakeholders
Familiar with how LLMs work and how models like Claude are trained
Highly organized with the ability to manage multiple parallel workstreams
High tolerance for ambiguity and ability to balance strategic priorities with rapid, high-quality execution
Able to thrive in fast-paced research environments with shifting priorities and novel technical challenges
Passionate about AI safety and the importance of high-quality data
Strong candidates may also have
Experience at companies training AI models or agents, or creating AI training data, evaluations, or environments
Knowledge of AI safety research methodologies and evaluation frameworks
Experience with RLHF or similar human-in-the-loop training methods
Domain expertise in software engineering or cybersecurity
Track record of building and scaling operations teams
Compensation & Benefits
The expected base compensation for this position is listed below. Our total compensation package for full-time employees includes equity and benefits, and may include incentive compensation.
$250,000 - $365,000 USD
Logistics
Education requirements: We require at least a Bachelor’s degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. Some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas, but not for every role or candidate. If we make you an offer, we will make reasonable efforts to obtain a visa for you, which may include working with an immigration lawyer.
Application note: We encourage you to apply even if you do not meet every qualification. We value diverse perspectives and aim to include a range of experiences on our team.
How we’re different
We believe that high-impact AI research is big science: conducted by a cohesive team on a few large-scale efforts. We value impact and pursue steerable, trustworthy AI. We emphasize collaboration, frequent research discussions, and strong communication skills. Our research directions build on a history of work that includes GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, and AI Safety.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office space.
We are an equal opportunity employer. Information collected through voluntary self-identification is confidential and used for compliance purposes only.