Anthropic
Data Operations Manager - Computer Use & Tool Use
Anthropic, San Francisco, California, United States, 94199
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
As Data Operations Manager for Computer Use & Tool Use, you will build and scale data operations that advance Claude's computer use capabilities and tool use safety. You will partner with research teams to design and execute data strategies, manage vendor relationships, and own the entire data pipeline from requirements to production. This is a zero-to-one role requiring technical depth to understand what makes high-quality training data for autonomous agents, with a focus on strategy and execution rather than hands-on engineering.
About the Impact
The data strategies and operations you build will directly determine how well Claude can use tools safely, operate computers autonomously, and maintain quality across long-horizon agentic workflows. You will work with world-class researchers advancing frontier capabilities, safety, and model performance while building the operational infrastructure to scale these efforts.
We’re looking for someone excited about scaling quality for complex, multi-turn agent interactions—someone who can think strategically about data needs for both capabilities and safety, build the right partnerships, and execute flawlessly. If you thrive at the intersection of technical depth and operational excellence, we’d love to hear from you.
Responsibilities
Develop and execute data strategies for computer use, tool use safety, and agentic AI research
Partner with research leaders to translate technical requirements into operational frameworks
Build data collection and evaluation systems for complex scenarios: prompt injection robustness, multi-turn agent conversations, adversarial attacks, and autonomous workflows
Scale the generation of realistic evaluation environments that capture real-world tool use and computer use challenges
Identify, evaluate, and manage specialized contractors and vendors for technical data collection
Implement quality control processes to ensure data meets training requirements for both capabilities and safety
Manage multiple complex projects simultaneously, balancing research velocity with rigorous evaluation standards
Track metrics and communicate progress to stakeholders
You may be a good fit if you
Have 3+ years of experience in technical operations, product management, or entrepreneurial roles building from zero to scale
Have strong technical foundations: proficiency in Python and an understanding of ML workflows, RL environments, and evaluation frameworks
Have strong communication skills and can effectively engage with both technical and non-technical stakeholders
Are familiar with how LLMs work and could describe concepts like RLHF, tool use, and agentic workflows
Understand the unique challenges of evaluating autonomous systems and long-horizon agent behaviors
Are highly organized and can manage multiple parallel workstreams effectively
Have a high tolerance for ambiguity and can balance strategic priorities with rapid execution
Thrive in fast-paced research environments with shifting priorities and novel technical challenges
Are passionate about AI safety and understand the critical importance of high-quality data in building safe, capable agentic systems
Strong candidates may also have
Experience at companies training AI models, building AI agents, or creating AI training data, evaluations, or environments
Knowledge of computer and tool use safety challenges like prompt injection, data exfiltration attempts, or adversarial attacks
Experience with RLHF, reinforcement learning techniques, or similar human-in-the-loop training methods
Domain expertise in computer use automation, security, or AI safety evaluation
Familiarity with model performance monitoring, training observability, or quality assessment systems
Track record of building and scaling operations teams
Compensation
The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.
$250,000 - $365,000 USD
Logistics
Education requirements:
We require at least a Bachelor’s degree in a related field or equivalent experience.
Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship:
We do sponsor visas. If we make you an offer, we will make every reasonable effort to help with the process, and we retain an immigration lawyer to assist.
We encourage you to apply even if you do not meet every single qualification. We value diverse perspectives and believe AI systems have significant social and ethical implications.
Equal employment opportunity: Anthropic is an equal opportunity employer. We do not discriminate on the basis of any legally protected status.