Liberty Mutual Insurance
GenAI Security Platform Architect
Liberty Mutual Insurance, Columbus, Ohio, United States, 43224
Overview
The Security Architecture & Innovation team within the Global Cybersecurity (GCS) organization is seeking a seasoned GenAI Security Platform Architect with expertise in securing AI/ML systems and GenAI applications. The candidate will define and drive the security architecture, controls, and governance for AI platforms, models, and AI-enabled products. This role partners closely with Data Science, Enterprise Data & Analytics Technology, MLOps, Platform/Cloud, Legal/Privacy, and Global Cybersecurity Governance, Risk and Compliance to design secure-by-design AI solutions that are resilient to adversarial threats and meet evolving regulatory requirements.
Responsibilities
- Define and own the end-to-end security architecture for AI/ML systems (training, fine-tuning, inference/serving, RAG, agents, and integrations).
- Develop and maintain reference architectures and guardrails for common AI patterns (e.g., RAG with vector databases, multi-agent workflows/orchestration, LLM API integrations, on-prem vs. cloud model hosting).
- Build and maintain an AI security controls library mapped to frameworks (e.g., NIST AI RMF, OWASP Top 10 for LLM Apps, MITRE ATLAS).
- Establish risk appetite and control requirements across the AI lifecycle; perform design reviews and sign-offs for AI initiatives.
- Define security baselines, secure configurations, and kill-switch/rollback strategies for AI components.
- Continuously assess the threat landscape and update risk models specific to AI/ML, GenAI, and insurance-sector adversaries.
- Secure AI development and MLOps; integrate security into the ML/LLM SDLC and CI/CD pipelines (dataset curation, feature engineering, model training, evaluation, packaging, registry, deployment).
- Partner across Global Cybersecurity, Global Digital Solutions, and Liberty IT to enforce least privilege, secrets management, and policy-as-code for AI pipelines and serving infrastructure.
- Champion DevSecOps automation for AI projects by embedding security controls and testing directly into development pipelines.
- Recommend and consult on adversarial testing and red teaming for AI systems; coordinate jailbreak/prompt-injection testing, model evasion scenarios, and safety evaluations.
- Recommend and validate defenses (input/output filtering, content moderation, prompt hardening, retrieval sanitization, adversarial training, rate limiting/abuse detection).
- Drive monitoring for model drift, anomaly detection, and harmful output prevention; develop response playbooks for AI incidents.
- Ensure data minimization, classification, encryption, and access controls for training and inference data (including embeddings and vector stores); ensure compliance with global privacy regulations (e.g., CCPA, NYDFS, GDPR) in AI/ML contexts.
- Recommend and consult with GRC on AI security governance, policies, and standards; define control objectives and measurable KPIs; support vendor/security assessments for AI services and model providers.
- Evaluate and select AI security tools; manage POCs and guide build-vs-buy decisions.
- Mentor teams on best practices in AI/ML security and help build internal capability across engineering, risk, and product functions.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Security, or equivalent experience.
- 8+ years in cybersecurity, with 3+ years focused on securing AI/ML systems or GenAI applications in production.
- CISSP certification required.
- Demonstrated deep technical experience designing secure architectures for:
  - ML pipelines and MLOps platforms (data ingestion, feature stores, training, model registry, deployment, monitoring);
  - GenAI workloads (LLM APIs, fine-tuning, RAG, vector databases, agent frameworks);
  - Cloud-native environments (containers/Kubernetes, serverless, service mesh, VPC/network security).
- Strong knowledge of AI-specific threats and mitigations: data poisoning, model inversion/membership inference, model theft/IP protection, adversarial examples, prompt injection/jailbreaks, exfiltration via outputs, and LLM supply chain risks.
- Hands-on experience with security frameworks and standards: NIST AI RMF, OWASP Top 10 (including LLM apps), MITRE ATT&CK and ATLAS, or similar.
- Experience implementing identity and access controls for AI services, secrets management, encryption, and monitoring/logging for AI systems.
- Strong communication skills; ability to influence architecture and risk decisions across engineering, product, and executive stakeholders; ability to collaborate with diverse teams.
Preferred Qualifications
- Experience with enterprise GenAI platforms and tools (MLOps, LLM/GenAI, observability, AI eval frameworks, red-team tooling).
- Advanced degree (MS/PhD) in Security, ML/AI, or a related field is a plus.
How We Work
- Partner-first: embed with Data Science, MLOps, and Product teams to enable speed with safety.
- Automate-by-default: codify controls in pipelines and platforms rather than relying on manual gates.
- Measurable risk management: define clear control objectives, metrics, and continuous improvement loops.
Pay and Benefits
Base pay range: $175,000.00/yr - $315,000.00/yr. This range is provided by Liberty Mutual Insurance; your actual pay will be based on your skills and experience. Talk with your recruiter to learn more.
Pay Philosophy: The typical starting salary range for this role is determined by a number of factors, including skills, experience, education, certifications, and location. The full salary range for this role reflects the competitive labor market value for all employees in these positions across the national market and provides an opportunity to progress as employees grow and develop within the role. Some roles may include commission and/or bonus earnings as described in the compensation plan for the role.
About Us
Liberty Mutual is an equal opportunity employer. We will not tolerate discrimination on the basis of race, color, national origin, sex, sexual orientation, gender identity, religion, age, disability, veteran status, pregnancy, genetic information, or any other basis prohibited by law. We are committed to fostering an inclusive environment with benefits and continuous learning opportunities.
Equal Opportunity Notices
Fair Chance Notices: California, Los Angeles (Incorporated), Los Angeles (Unincorporated), Philadelphia, San Francisco