OpenAI
Research Engineer / Research Scientist, Alignment
About The Team
The Alignment team at OpenAI is dedicated to ensuring that our AI systems are safe, trustworthy, and aligned with human values, even as they scale in complexity and capability. Our work focuses on developing methods that keep AI systems following human intent across a wide range of scenarios, including adversarial and high-stakes situations. We aim to address the most pressing alignment challenges so that our models are prepared for real-world deployment.
About The Role
As a Research Engineer / Research Scientist on the Alignment team, you will work on ensuring AI systems follow human intent in complex scenarios. Your responsibilities include designing scalable solutions for AI alignment, integrating human oversight, and developing new evaluation methods. This role is based in San Francisco, CA, with a hybrid work model (3 days in-office) and relocation assistance available.
Responsibilities
- Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure.
- Design evaluations to measure risks and alignment with human values.
- Build tools to study model robustness in different situations.
- Design experiments to understand how alignment scales with compute, data, and adversarial resources.
- Develop new human-AI interaction paradigms and oversight methods.
- Train models to be calibrated on correctness and risk.
- Innovate on new approaches to AI alignment research.
Qualifications
- PhD or equivalent experience in computer science, computational science, data science, cognitive science, or a related field.
- Strong engineering skills, especially in large-scale machine learning systems (e.g., PyTorch).
- Deep understanding of alignment algorithms and techniques.
- Experience with data visualization or data collection interfaces (e.g., TypeScript, Python).
- Enthusiasm for fast-paced, collaborative research environments.
- Interest in developing trustworthy, safe, and reliable AI models for high-stakes scenarios.
About OpenAI
OpenAI is committed to ensuring that artificial general intelligence benefits all of humanity. We advance AI capabilities safely and seek out diverse perspectives to shape the future of AI technology. We are an equal opportunity employer and provide reasonable accommodations for applicants with disabilities.