Product Policy - National Security | Washington, DC
About the Team
The Product Policy team is responsible for the development, implementation, enforcement, and communication of the policies that govern use of OpenAI’s services, including ChatGPT, GPTs, the GPT store, Sora, and the OpenAI API. As a member of this team, you will be instrumental in developing policy approaches to best enable both innovative and responsible use of AI so that our groundbreaking technologies are truly used to benefit all people.
About the Role
As a member of the Product Policy team, you will work at the intersection of AI capabilities, national security use cases, and risk governance. You will contribute to the development and refinement of OpenAI’s national security usage policies, support operational decision-making on sensitive use cases, and partner closely with technical, legal, investigations, and OpenAI for Government teams to ensure responsible deployment of our systems in high-risk contexts.
In this role, you will own significant parts of the core work, operating with autonomy while partnering closely with your manager on strategic direction and prioritization. Much of the foundational infrastructure for this work is already in place; we are seeking a strong executor with relevant domain expertise who can take ownership of this portfolio and drive it forward as the scope and complexity of national security engagement continue to grow.
This role is based in Washington, D.C. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Contribute to the ongoing development, refinement, and communication of OpenAI’s approach to governing national security use of OpenAI technology.
Support operational reviews of high-risk use cases, including preparing clear, well-structured decision briefs for internal stakeholders on complex or sensitive matters.
Partner closely with cross-functional teams to design and support risk assessments, evaluations, and other mechanisms used to assess fitness-for-purpose of AI capabilities in national security contexts.
Drive national-security deployments through required safety, risk, and governance processes.
Collaborate with investigations and detection teams to define and operationalize principles for identifying, prioritizing, and responding to unauthorized or adversarial national-security use of OpenAI technology.
Engage directly with government stakeholders on AI safety and governance, helping both OpenAI and government partners understand emerging risks and responsible deployment.
You might thrive in this role if you:
Have 4+ years of experience working in national security, defense, intelligence, or adjacent high-risk policy environments, with demonstrated experience with advanced or emerging technologies (including AI-enabled systems).
Have experience developing and/or operationalizing risk assessments and policies in partnership with technical and legal teams.
Demonstrate the ability to drive alignment across diverse internal and external stakeholders, navigating substantive differences in perspective while balancing principled risk management with pragmatic decision‑making.
Communicate and engage with product managers, engineers, researchers, lawyers, and executives with clarity and credibility.
Demonstrate the ability to adapt quickly to new problem spaces and evolving priorities, learning unfamiliar domains as needed and contributing effectively across adjacent work streams as organizational needs shift.
You could be an especially great fit if you are:
Eligible for a U.S. security clearance (or equivalent clearance in another NATO country).
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Compensation
$280K + equity