Anthropic
Senior/Staff Software Engineer, Inference
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry's largest compute‑agnostic inference deployments, and we are responsible for the entire stack, from intelligent request routing to fleet‑wide orchestration across diverse AI accelerators. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high‑performance inference infrastructure they need to develop next‑generation models.

You may be a good fit if you:
Have significant software engineering experience, particularly with distributed systems
Are results‑oriented, with a bias towards flexibility and impact
Pick up slack, even if it goes outside your job description
Enjoy pair programming (we love to pair!)
Want to learn more about machine learning systems and infrastructure
Thrive in environments where technical excellence directly drives both business results and research breakthroughs
Care about the societal impacts of your work

Strong candidates may also have experience with:
Implementing and deploying machine learning systems at scale
Load balancing, request routing, or traffic management systems
LLM inference optimization, batching, and caching strategies
Kubernetes and cloud infrastructure (AWS, GCP)
Python or Rust

Representative projects:
Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
Building production‑grade deployment pipelines for releasing new models to millions of users
Integrating new AI accelerator platforms to maintain our hardware‑agnostic competitive advantage
Contributing to new inference features (e.g., structured sampling, prompt caching)
Analyzing observability data to tune performance based on real‑world production workloads
Managing multi‑region deployments and geographic routing for global customers

Deadline to apply:
None. Applications will be reviewed on a rolling basis.

Compensation:
$300,000 - $485,000 USD

Logistics
Education requirements:
We require at least a Bachelor's degree in a related field or equivalent experience.

Location‑based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. Some roles may require more time in our offices.

Visa sponsorship:
We do sponsor visas! However, we aren’t able to sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to secure the visa.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification listed.

How we're different
We believe that the highest‑impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large‑scale research efforts and value impact—advancing our long‑term goals of steerable, trustworthy AI—over working on smaller, more specific puzzles. We host frequent research discussions to ensure that we are pursuing the highest‑impact work, and we value strong communication skills.

Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage:
Learn about our policy for using AI in our application process.

We are an equal‑employment‑opportunity employer. We do not discriminate on the basis of any protected status under any applicable law.