OpenAI
About the Team
At OpenAI, our Trust, Safety & Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base. We support users across ChatGPT, our API, enterprise offerings, and developer tools, handling sensitive inbound cases, building detection and enforcement systems, and scaling operational processes to meet the demands of a fast-moving, high-stakes environment.

About the Role
We are seeking experienced, senior-level analysts who specialize in one or more of the following areas:

- Content Integrity & Scaled Enforcement - Detecting, reviewing, and acting on policy violations, harmful content, and emerging abuse patterns at scale.
- Fraud & Scam Prevention - Investigating and preventing financial fraud, scams, account takeovers, model-enabled deception, and other trust-breaking behaviors.
- Privacy & Regulatory Compliance - Managing privacy rights requests, intellectual property complaints, and audit/regulatory escalations, and ensuring compliance with global frameworks (e.g., GDPR, CCPA, DSA/OSA, EU AI Act).
- Emerging Risk Operations - Identifying, triaging, and mitigating new and complex safety, policy, or integrity challenges in a rapidly evolving AI landscape.
- Safety Response Operations - Overseeing vendor escalations and workflows, managing internal escalations, conducting quality reviews, driving operational support, and leading model labeling and training efforts.

In this role, you will own high-sensitivity workflows, act as an incident manager for complex cases, and build scalable operational systems, including tooling, automation, and vendor processes that reinforce user safety and trust while meeting our legal, ethical, and product obligations.

We use a hybrid work model of 3 days in the San Francisco office per week and offer relocation assistance to new employees.

Please note: This role may involve exposure to sensitive content, including material that is sexual, violent, or otherwise disturbing.

In This Role, You Will:
- Handle and resolve high-priority cases in your area of specialization (scaled content enforcement, fraud/scams, privacy/regulatory, or emerging risks).
- Perform in-depth risk evaluations and investigations using internal tools, product signals, and third-party data.
- Act as incident manager for escalations requiring nuanced policy, legal, or regulatory interpretation.
- Partner with cross-functional teams to design and implement world-class operational workflows, decision trees, and automation strategies.
- Build feedback loops from casework to inform product, engineering, and policy improvements.
- Develop and maintain playbooks, SOPs, macros, and knowledge resources for internal teams and vendors.
- Lead or contribute to cross-functional projects, from zero-to-one process builds to global operational scale-ups.
- Monitor operational health through case quality audits, SLA adherence, escalation accuracy, and user satisfaction metrics.
- Train and support vendor teams, ensuring consistent quality and alignment with OpenAI's trust and safety standards.

You Might Thrive in This Role If You:
- Have 5+ years of experience in one or more of: trust & safety, fraud prevention, scam investigation, privacy/legal operations, compliance, or other risk/integrity domains, ideally in a global or high-growth tech environment.
- Leverage OpenAI technology to enhance workflows, improve decision-making, and scale operational impact.
- Bring deep domain expertise in your specialization area and familiarity with relevant legal, policy, and technical frameworks.
- Have a track record of scaling operations, building processes, and working cross-functionally to improve performance and safety outcomes.
- Possess exceptional analytical skills, able to detect patterns, assess risk, and recommend policy or product changes based on evidence.
- Communicate with clarity, empathy, and precision, especially in sensitive user-facing contexts.
- Thrive in ambiguous, high-autonomy environments and balance speed with diligence.
- Are comfortable with frequent context switching, managing multiple projects, and prioritizing impact.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.