
Software Engineer, Inference

Anthropic, San Francisco, California, United States, 94199


About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry’s largest compute-agnostic inference deployments. We are responsible for the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators. The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex distributed-systems challenges across multiple accelerator families and emerging AI hardware running on multiple cloud platforms.

You may be a good fit if you:

Have significant software engineering experience, particularly with distributed systems

Are results-oriented, with a bias towards flexibility and impact

Pick up slack, even if it goes outside your job description

Enjoy pair programming (we love to pair!)

Want to learn more about machine learning systems and infrastructure

Thrive in environments where technical excellence directly drives both business results and research breakthroughs

Care about the societal impacts of your work

Strong candidates may also have experience with:

Implementing and deploying machine learning systems at scale

Load balancing, request routing, or traffic management systems

LLM inference optimization, batching, and caching strategies

Kubernetes and cloud infrastructure (AWS, GCP)

Python or Rust

Representative projects:

Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators

Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads

Building production-grade deployment pipelines for releasing new models to millions of users

Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage

Contributing to new inference features (e.g., structured sampling, prompt caching)

Analyzing observability data to tune performance based on real-world production workloads

Managing multi-region deployments and geographic routing for global customers
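For readers curious what the "intelligent routing" project above might involve, here is a minimal sketch of one classic policy, least-loaded routing. This is a hypothetical illustration only, not Anthropic's system; the `LeastLoadedRouter` class and replica names are invented here.

```python
class LeastLoadedRouter:
    """Toy least-loaded routing policy (hypothetical illustration).

    Each incoming request is sent to the replica currently serving
    the fewest in-flight requests. Real inference routers also weigh
    factors like batch occupancy, cache locality, and hardware type.
    """

    def __init__(self, replicas):
        # in-flight request count per replica
        self.loads = {r: 0 for r in replicas}

    def route(self):
        # pick the replica with the fewest in-flight requests
        replica = min(self.loads, key=self.loads.get)
        self.loads[replica] += 1
        return replica

    def complete(self, replica):
        # called when a replica finishes serving a request
        self.loads[replica] -= 1


router = LeastLoadedRouter(["accel-0", "accel-1", "accel-2"])
assignments = [router.route() for _ in range(3)]
# with all replicas idle, the first three requests spread across all three
```

The policy is deliberately simple: it keeps only an in-flight counter per replica, so routing is O(number of replicas) per request with no coordination beyond completion callbacks.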

Deadline to apply

None. Applications will be reviewed on a rolling basis.

Compensation

The expected base compensation for this position is $300,000 - $485,000 USD. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation.

Logistics

Education requirements:

We require at least a Bachelor’s degree in a related field or equivalent experience.

Location-based hybrid policy:

Currently, we expect all staff to be in one of our offices at least 25% of the time. Some roles may require more time in offices.

Visa sponsorship:

We do sponsor visas. If we make you an offer, we will make reasonable efforts to obtain a visa, with support from an immigration lawyer.

How we’re different

We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on a few large-scale research efforts, with a focus on impact and steerable, trustworthy AI. We value collaboration, communication, and empirical progress in AI research.
