OpenAI
AI Social Risk Analyst
OpenAI is looking for an AI Social Risk Analyst to support safety and abuse risk analysis for AI‑enabled social experiences.
About the Team
The Intelligence and Investigations team rapidly identifies and mitigates abuse and strategic risks to ensure a safe online ecosystem. We focus on emerging abuse trends, analyze risks, and partner with internal and external stakeholders to implement effective mitigation strategies. Our work contributes to OpenAI’s overarching goal of developing AI that benefits humanity.
About the Role
As an AI social risk analyst, you will sit at the frontline of new AI‑enabled social challenges. You will own the analytical view of safety and abuse risks in AI‑social environments: Sora content and sharing, group chats, messaging, and AI‑assisted brand and creator experiences. You will spot early warning signs, dig into concerning behaviors, and translate weak signals into clear, prioritized risk calls that guide mitigations and keep users, brands, and communities safe while enabling productive, creative use of these tools.
Responsibilities
Map and prioritize the AI‑social risk landscape
Build and continuously refine a clear picture of how AI is used in social‑like products (e.g., Sora‑powered clips, group chats, messaging assistants, creator tools).
Design and maintain harm taxonomies tailored to AI‑mediated communication (e.g., synthetic harassment, coordinated AI‑assisted brigading, synthetic identity/brand misuse, reputational and intimate harms).
Maintain a risk register and prioritization framework that surfaces the top issues by severity, prevalence, exposure, and trajectory.
Detect and investigate emerging abuse patterns
Partner with investigations, operations, and product teams to surface new patterns of misuse across Sora, chats, and partner integrations.
Run structured deep dives on incidents—from synthetic impersonation and scams to targeted harassment or coordinated influence using AI‑generated media.
Connect individual incidents into system‑level stories about actors, incentives, product design weaknesses, and cross‑product spillover.
Turn analysis into actionable risk intelligence
Translate findings into clear, ranked risk lists and concrete proposals for mitigations that product, safety, and policy teams can execute on.
Collaborate with Safety Systems, Integrity, and Product to scope solutions such as classification improvements, UX guardrails, friction, enforcement flows, and detection signals.
Track whether mitigation work is landing: follow key indicators, pressure‑test assumptions, and push for course‑corrections when the data demands it.
Build early warning and measurement capabilities
Help define the core metrics and signals that indicate whether AI‑social environments are safe (e.g., key harm prevalence, severity distributions, escalation rates, brand safety issues).
Work with data science and visualization colleagues to shape monitoring views and dashboards that highlight leading indicators and unusual changes in user behavior or abuse patterns.
Propose targeted probes, structured reviews, and experiments that surface new risk modes around major launches and feature changes.
Provide strategic analysis and future‑looking perspectives
Produce concise, decision‑ready briefs on AI‑social risks for leadership, safety forums, and partner teams.
Run scenario analyses that explore how AI‑social harms might evolve over the next 6–24 months (e.g., how attackers might adapt to Sora, how group chats could be used for coordination, likely pressure points for brands and public figures).
Benchmark OpenAI’s AI‑social risk profile and mitigations against external incidents and other platforms, highlighting gaps, strengths, and opportunities.
Shape safety readiness for social‑like AI products
Contribute to product readiness and launch reviews by laying out expected abuse modes, risk tradeoffs, and monitoring/response plans.
Turn risk insights into practical guidance for internal teams (product, marketing, partnerships, communications) and, where appropriate, external partners using OpenAI technologies in social and brand contexts.
Develop reusable frameworks, playbooks, FAQs, and briefing materials that make it easier for the broader organization to understand AI‑social risks and respond consistently.
Qualifications & Skills
Significant experience (typically 5+ years) in trust and safety, integrity, security, policy analysis, or intelligence work focused on social media, messaging, online communities, or adjacent environments.
Demonstrated ability to analyze complex online harms (e.g., harassment, coordinated abuse, scams, synthetic media, influence operations, brand safety issues) and convert analysis into concrete, prioritized recommendations.
Strong analytical skills and comfort working with both qualitative and quantitative inputs—including casework, incident reports, OSINT, product context, policy frameworks, and basic metrics in partnership with data science.
Strong adversarial and product intuition, able to foresee how actors might adapt AI‑social and creative tools for misuse and evaluate how product mechanics, incentives, and UX decisions influence risk.
Experience designing and using risk frameworks and taxonomies (e.g., harm classification schemes, severity/likelihood matrices, prioritization models) to structure ambiguous spaces and support decision‑making.
Proven ability to work cross‑functionally with product, engineering, data science, operations, legal, and policy teams, including pushing for clarity on tradeoffs and following through on mitigation work.
Excellent written and verbal communication skills, including experience producing concise, executive‑ready briefs and explaining sensitive, complex issues in grounded, concrete terms.
Comfort operating in fast‑changing, ambiguous environments: you can identify weak signals, form hypotheses, test them quickly, and adjust as the product and threat landscape evolves.
Additional Information
Compensation Range: $220K – $320K.
OpenAI is an equal‑opportunity employer; we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristics. For additional information, see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers, criminal history may impact certain job duties.
We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made via the provided link.
The OpenAI Global Applicant Privacy Policy applies.