OpenAI
Quantitative Threat Forecasting Analyst
OpenAI, San Francisco, California, United States, 94102
The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem. We are dedicated to identifying emerging abuse trends, analyzing risks, and working with our internal and external partners to implement effective mitigation strategies to protect against misuse. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

We're looking for a world-class quantitative analyst to build the predictive backbone of this mission: someone who thrives on modeling ambiguity, forecasting high-stakes outcomes, and translating messy, sparse, or fast-moving data into decision-ready insight. As a Quantitative Threat Forecasting Analyst, you'll design and deploy statistical models that forecast threat emergence, detect anomalies, and quantify risk, often when signal is weak, timelines are short, and the stakes are high. Your work will power both tactical responses to abuse and strategic decisions about how we evolve our safety detection, investigation, and analysis systems. This is a rare opportunity to apply advanced statistical modeling, risk analytics, and real-world inference to one of the most consequential safety challenges of our time.

This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.

In this role, you will:
- Design probabilistic and Bayesian models using PyMC, NumPyro (JAX-accelerated HMC/NUTS), and TensorFlow Probability to capture uncertainty at scale.
- Build classical and deep-learning forecasts with statsmodels baselines, plus state-of-the-art libraries like Darts, GluonTS, Chronos, sktime, and Nixtla's MLForecast for multivariate or long-horizon time-series problems.
- Develop real-time anomaly-detection pipelines leveraging PyOD 2.0 for GPU-ready detectors and River for streaming/online ML on telemetry data.
- Apply survival-analysis and rare-event methods (e.g., Cox PH, random survival forests, DeepSurv) via scikit-survival to model threat lifecycles and hazard rates.
- Run stress tests and Monte Carlo simulations to evaluate the likelihood and impact of low-frequency, high-severity threats; translate findings into resilient safety-engineering requirements.
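To make the last point concrete, here is a minimal, self-contained sketch of the kind of Monte Carlo rare-event estimate the role involves. All numbers (an incident rate of 0.1 per year over a 5-year horizon) are illustrative assumptions, not the team's actual models, and the estimate is cross-checked against the Poisson closed form 1 - exp(-lambda*t).

```python
# Hypothetical sketch: probability of at least one incident from a
# low-frequency threat, modeled as a Poisson arrival process and
# estimated by Monte Carlo simulation (pure stdlib, no dependencies).
import math
import random


def prob_at_least_one_incident(rate_per_year, years, n_sims=100_000, seed=0):
    """Estimate P(>= 1 incident) over the horizon by simulating Poisson counts."""
    rng = random.Random(seed)
    lam = rate_per_year * years  # expected number of incidents in the horizon
    hits = 0
    for _ in range(n_sims):
        # Inverse-transform sampling of a Poisson(lam) count.
        threshold = rng.random()
        k, p = 0, math.exp(-lam)  # P(K = 0)
        cum = p
        while cum < threshold:
            k += 1
            p *= lam / k          # P(K = k) from P(K = k - 1)
            cum += p
        if k >= 1:
            hits += 1
    return hits / n_sims


estimate = prob_at_least_one_incident(rate_per_year=0.1, years=5)
analytic = 1 - math.exp(-0.5)  # closed form: 1 - exp(-lambda * t)
```

In practice the value of the simulation is not this toy probability (which has a closed form) but that the same loop extends to severity distributions, correlated threats, and mitigation scenarios that have no analytic answer.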
- Collaborate across disciplines (investigations, engineering, policy) to embed statistical rigor into threat prioritization, guardrails, and product decisions.
- Communicate insights through clear briefs, dashboards, and visualizations that drive executive action.
- Own production pipelines in Python/JAX/PyTorch or R, using SQL or Spark-like engines (DuckDB, BigQuery, Snowflake) and GPU/TPU acceleration where appropriate.

You might thrive in this role if you:
- Have five or more years of experience in a quantitative research, forecasting, or risk modeling role in finance, tech, safety, security, or public policy.
- Have deep fluency in statistical inference, forecasting, uncertainty quantification, and decision modeling, especially under sparse or adversarial data conditions.
- Have demonstrated impact: you've shipped models that directly informed capital allocation, fraud prevention, incident response, or safety interventions.
- Have expertise with modern toolchains (NumPyro, TensorFlow Probability, PyMC, Darts, GluonTS/Chronos, sktime, PyOD 2.0, River, scikit-survival) and readiness to evaluate emerging libraries as the field evolves.
- Have strong coding skills (Python/JAX/PyTorch or R) and data-engineering fundamentals (SQL, Spark, data warehousing).
- Are a crisp communicator able to influence multidisciplinary partners and executives.
- Are comfortable navigating imperfect data and prioritizing under uncertainty in a rapidly changing threat landscape.

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.