Anthropic
[Expression of Interest] Research Scientist/Engineer, Alignment Finetuning
San Francisco, California, United States
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Role
As a Research Scientist/Engineer on the Alignment Finetuning team at Anthropic, you'll lead the development and implementation of techniques for training language models that are more aligned with human values: models that demonstrate stronger moral reasoning, improved honesty, and good character. You'll develop novel finetuning techniques and use them to demonstrably improve model behavior.
Note: For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back about your application to this team until the new year.
Responsibilities
Develop and implement novel finetuning techniques using synthetic data generation and advanced training pipelines
Use these techniques to train models with better alignment properties, including honesty, character, and harmlessness
Create and maintain evaluation frameworks to measure alignment properties in models
Collaborate across teams to integrate alignment improvements into production models
Develop processes to help automate and scale the work of the team
You May Be a Good Fit If You
Have an MS/PhD in Computer Science, ML, or a related field, or equivalent experience
Possess strong programming skills, especially in Python
Have experience with ML model training and experimentation
Have a track record of implementing ML research
Demonstrate strong analytical skills for interpreting experimental results
Have experience with ML metrics and evaluation frameworks
Excel at turning research ideas into working code
Can identify and resolve practical implementation challenges
Strong Candidates May Also Have
Experience with language model finetuning
Background in AI alignment research
Published work in ML or alignment
Experience with synthetic data generation
Familiarity with techniques like RLHF, constitutional AI, and reward modeling
Track record of designing and implementing novel training approaches
Experience with model behavior evaluation and improvement
Annual Salary
$315,000–$340,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
We encourage you to apply even if you do not believe you meet every single qualification.
How We're Different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.