Logo
Anthropic

[Expression of Interest] Research Scientist/Engineer, Honesty

Anthropic, San Francisco, California, United States, 94199

Save Job

[Expression of Interest] Research Scientist/Engineer, Honesty

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role

As a Research Scientist/Engineer focused on honesty within the Finetuning Alignment team, you\'ll spearhead the development of techniques to minimize hallucinations and enhance truthfulness in language models. Your work will focus on creating robust systems that are accurate and reflect their true levels of confidence across all domains, and that work to avoid being deceptive or misleading. Your work will be critical for ensuring our models maintain high standards of accuracy and honesty across diverse domains. Note: The team is based in New York and we prefer candidates who can be based in New York. For this role, we conduct all interviews in Python. We have filled our headcount for 2025. However, we are leaving this form open as an expression of interest since we expect to be growing the team in the future, and we will review your application when we do. You may not hear back on your application to this team until the new year. Responsibilities

Design and implement novel data curation pipelines to identify, verify, and filter training data for accuracy given the model\'s knowledge Develop specialized classifiers to detect potential hallucinations or miscalibrated claims made by the model Create and maintain comprehensive honesty benchmarks and evaluation frameworks Implement techniques to ground model outputs in verified information, such as search and retrieval-augmented generation (RAG) systems Design and deploy human feedback collection specifically for identifying and correcting miscalibrated responses Design and implement prompting pipelines to generate data that improves model accuracy and honesty Develop and test novel RL environments that reward truthful outputs and penalize fabricated claims Create tools to help human evaluators efficiently assess model outputs for accuracy You may be a good fit if you

Have an MS/PhD in Computer Science, ML, or related field Possess strong programming skills in Python Have industry experience with language model finetuning and classifier training Show proficiency in experimental design and statistical analysis for measuring improvements in calibration and accuracy Care about AI safety and the accuracy and honesty of both current and future AI systems Have experience in data science or the creation and curation of datasets for finetuning LLMs An understanding of various metrics of uncertainty, calibration, and truthfulness in model outputs Strong candidates may also have

Published work on hallucination prevention, factual grounding, or knowledge integration in language models Experience with fact-grounding techniques Background in developing confidence estimation or calibration methods for ML models A track record of creating and maintaining factual knowledge bases Familiarity with RLHF specifically applied to improving model truthfulness Worked with crowd-sourcing platforms and human feedback collection systems Experience developing evaluations of model accuracy or hallucinations Join us in our mission to ensure advanced AI systems behave reliably and ethically while staying aligned with human values. The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity, benefits, and may include incentive compensation. $315,000 - $340,000 USD Logistics

Education requirements: We require at least a Bachelor\'s degree in a related field or equivalent experience. Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship: We sponsor visas. If we make you an offer, we will help with the process and retain an immigration lawyer to assist. We encourage you to apply even if you do not meet every single qualification. Not all strong candidates will meet every listed qualification. We value diverse perspectives and strive to include a range of experiences on our team. How we\'re different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on a few large-scale research efforts. We value impact — advancing our long-term goals of steerable, trustworthy AI — and view AI research as an empirical science. We are highly collaborative and value strong communication skills. For more context, you can read about our recent directions and research topics we continue to advance. Come work with us

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a welcoming office environment. Guidance on candidates\' AI usage is provided in our application policy. Apply for this job

We support an applicant-friendly process and encourage you to submit your materials. If you\'re ready to apply, please provide the following information in your application: your name, contact details, resume/CV (or LinkedIn), any publications or relevant links, and responses to key questions about your fit and approach to reducing hallucinations. You will be prompted to answer role-specific questions as part of the application.

#J-18808-Ljbffr