REVOLUTION Medicines
Director of AI Security
Revolution Medicines is a clinical-stage precision oncology company focused on developing novel targeted therapies to inhibit frontier targets in RAS-addicted cancers. As a new member of the Revolution Medicines team, you will join other outstanding Revolutionaries in a tireless commitment to patients with cancers harboring mutations in the RAS signaling pathway.

We are seeking a seasoned cybersecurity leader to serve as the Director of AI Security. This individual will be responsible for developing and executing the enterprise-wide strategy to secure artificial intelligence (AI) and machine learning (ML) systems. Reporting to the VP of IS Security, Risk, and Compliance, the Director will work cross-functionally with data science, engineering, compliance, and legal teams to ensure that AI technologies are deployed securely, ethically, and in alignment with regulatory and organizational standards. This role presents an exciting opportunity to shape the future of secure AI adoption across the enterprise. The Director will lead efforts to identify and mitigate AI-specific risks, establish governance frameworks, and embed security into the AI/ML lifecycle, from data ingestion and model training to deployment and monitoring.

In this role you will call on your skills to:

Stakeholder Engagement: Build strong partnerships with AI/ML, data science, and engineering teams to understand their workflows and identify security needs and opportunities.

Strategic Planning: Define and drive the AI security strategy, policies, and governance frameworks that align with enterprise risk management and responsible AI principles.

Risk Management: Lead threat modeling, risk assessments, and security reviews for AI/ML systems, including model integrity, data privacy, and adversarial robustness.

Project Implementation: Oversee the integration of security controls into AI platforms and MLOps pipelines. Ensure secure development lifecycle practices are followed.

Incident Response: Develop and maintain incident response plans for AI-related threats. Lead investigations and remediation efforts for AI system breaches or anomalies.

Compliance & Ethics: Collaborate with legal and compliance teams to ensure AI systems meet regulatory requirements (e.g., NIST AI RMF, EU AI Act) and ethical standards.

Performance Monitoring: Track and report on AI security metrics and posture. Recommend improvements based on threat intelligence and emerging trends.

Training & Awareness: Lead internal education efforts to promote AI security best practices across technical and non-technical teams.

Innovation & Research: Stay informed on the evolving threat landscape, emerging technologies, and academic research related to AI security.

Generative AI (GenAI): Develop and implement security strategies for GenAI models and applications, ensuring responsible use and protection against emerging threats.

Required Skills, Experience and Education:

Bachelor's degree or equivalent and a minimum of 15 years in cybersecurity, including 8+ years of leadership experience and at least 5 years focused on AI/ML security or adjacent domains.

Deep understanding of AI/ML architectures, data science workflows, and associated security risks.

Experience with adversarial machine learning, model inversion, data poisoning, and other AI-specific attack techniques.

Familiarity with cloud-native AI platforms (e.g., Azure ML, AWS SageMaker, Google Vertex AI).

Proven track record of leading cross-functional teams and driving security initiatives in complex environments.

Strong knowledge of regulatory frameworks and standards related to AI and data privacy.

Highly organized, with strong attention to detail and accuracy.

Excellent communication, presentation, and stakeholder engagement skills.

Ability to manage multiple projects and priorities autonomously.

Preferred Skills:

Team leadership experience.

Master's degree or equivalent in Cybersecurity, Computer Science, Data Science, or a related field.

Experience implementing secure MLOps pipelines and AI governance frameworks.

Familiarity with AI assurance tools and techniques (e.g., explainability, fairness, bias detection).

Certifications such as CISSP, CCSP, or emerging credentials in AI security (e.g., Certified AI Security Specialist).

Experience working in regulated industries such as healthcare, finance, or life sciences.

Participation in AI security research communities or standards bodies.

Experience with enterprise risk management and third-party AI risk assessments.