Anthropic
[Expression of Interest] Research Scientist/Engineer, Honesty
Anthropic, San Francisco, California, United States, 94199
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Role
As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you'll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. You will build robust systems that are accurate, reflect their true levels of confidence, and avoid being deceptive or misleading, ensuring our models maintain high standards of accuracy and honesty across diverse domains.
Note:
The team is based in New York, so we have a preference for candidates who can be based there. For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back on your application to this team until the new year.
Responsibilities
Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model’s knowledge.
Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model.
Create and maintain comprehensive honesty benchmarks and evaluation frameworks.
Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems.
Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses.
Design and implement prompting pipelines to generate data that improves model accuracy and honesty.
Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims.
Create tools to help human evaluators efficiently assess model outputs for accuracy.
You May Be a Good Fit If You
Have an MS/PhD in Computer Science, ML, or related field.
Possess strong programming skills in Python.
Have industry experience with language model finetuning and classifier training.
Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy.
Care about AI safety and the accuracy and honesty of both current and future AI systems.
Have experience in data science or the creation and curation of datasets for finetuning LLMs.
Have an understanding of various metrics of uncertainty, calibration, and truthfulness in model outputs.
Strong Candidates May Also Have
Published work on hallucination prevention, factual grounding, or knowledge integration in language models.
Experience with fact-grounding techniques.
Background in developing confidence estimation or calibration methods for ML models.
A track record of creating and maintaining factual knowledge bases.
Familiarity with RLHF specifically applied to improving model truthfulness.
Experience with crowd-sourcing platforms and human feedback collection systems.
Experience developing evaluations of model accuracy or hallucinations.
Annual Salary
$315,000–$340,000 USD
Logistics
Location: New York (hybrid policy with 25% office presence; may require more time in office).
Education requirements: At least a Bachelor’s degree in a related field or equivalent experience.
Visa sponsorship: We sponsor visas. We will make every reasonable effort to obtain a visa if you are extended an offer.
How We’re Different
We believe that the highest-impact AI research will be big science. We work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller, more specific puzzles. We are extremely collaborative and hold frequent research discussions to ensure we pursue the highest-impact work at all times. We value communication skills.
Come Work With Us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.