Alvarez & Marsal
Manager, AI Security & Compliance - Cybersecurity Governance
Alvarez & Marsal, San Francisco, California, United States, 94199
Overview
About Alvarez & Marsal
Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 professionals in over 40 countries. We solve client problems hands-on and value independent thinkers and doers who positively impact our clients and industry. Our culture emphasizes Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity.
What you will contribute
With the rapid adoption of AI technologies and an evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as trusted advisors to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation.

Responsibilities:
- Lead technical teams in executing AI security assessments, model audits, and compliance reviews against the EU AI Act, NIST AI RMF, ISO/IEC 23053/23894, and emerging AI governance standards.
- Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.
- Design and implement secure AI/ML architectures with MLOps security, including model versioning, data lineage tracking, feature store security, and secure deployment pipelines. Integrate security controls for LLMs (prompt injection prevention, output filtering, embedding security).
- Conduct technical assessments of AI/ML systems using adversarial testing tools (ART, Foolbox, CleverHans), ML platforms (MLflow, Kubeflow, SageMaker, Azure ML, Vertex AI), model monitoring and experiment tracking tools (Evidently AI, Fiddler AI, WhyLabs, Neptune.ai), LLM guardrails (Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10), and privacy-preserving ML libraries (PySyft, TensorFlow Privacy, Opacus).
- Implement AI compliance and governance solutions addressing regulations (EU AI Act, Canada's AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework), standards (ISO/IEC 23053/23894, IEEE 7000, NIST AI RMF), and sector-specific requirements (FDA AI/ML guidance, GDPR Article 22).
- Develop and execute penetration testing for AI systems (model extraction, data poisoning, membership inference, prompt injection, jailbreaking, backdoor detection).
- Program and deploy security solutions using Python (PyTorch, TensorFlow, scikit-learn), R, or Julia; LLM frameworks (Hugging Face Transformers, LangChain, LlamaIndex); explainability libraries (SHAP, LIME); bias tools (Fairlearn, AIF360); and infrastructure tooling (Docker, Kubernetes, Terraform).
- Integrate AI security with Zero Trust, IAM, and SIEM; implement automated compliance monitoring using SOAR tools (e.g., Splunk Phantom, Cortex XSOAR).
- Assess risks in foundation models, federated learning, edge AI, and multi-modal and generative AI applications.
- Create technical documentation including AI system security architecture reviews, threat models for ML pipelines, compliance mappings, and remediation roadmaps aligned with traditional and AI-specific frameworks.
- Travel up to 15% to client sites and assessment locations.
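To give a flavor of the membership inference testing mentioned above: an overfit model tends to be more confident on records it was trained on than on unseen ones, which a simple probe can measure. This is a toy sketch under made-up data, not A&M's assessment methodology; real engagements use shadow models and calibrated attacks (e.g., via ART).

```python
# Toy membership-inference probe: compare the model's confidence on
# training-set records ("members") versus held-out records ("non-members").
# All data and parameters below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deliberately overfit so the member/non-member confidence gap is visible.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

conf_members = model.predict_proba(X_train).max(axis=1)       # records the model saw
conf_nonmembers = model.predict_proba(X_holdout).max(axis=1)  # records it never saw

gap = conf_members.mean() - conf_nonmembers.mean()
print(f"members={conf_members.mean():.3f} "
      f"non-members={conf_nonmembers.mean():.3f} gap={gap:.3f}")
# A large positive gap suggests the model leaks training-set membership.
```

In a real assessment the gap would be turned into an attack success rate and compared against a chance baseline before being reported as a finding.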
Qualifications
- 5+ years in AI/ML development, deployment, or security assessment
- 3+ years in information security with a focus on application or cloud security
- Hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit-learn, Hugging Face)
- Proficiency in Python with AI/ML libraries and security testing tools
- Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks)
- Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act, ISO/IEC 23053/23894
- Experience with MLOps and secure model deployment
- Understanding of adversarial ML and AI security threats (OWASP ML Top 10, MITRE ATLAS)
- Familiarity with privacy-preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics)
- Experience with Docker and Kubernetes; infrastructure as code
- Knowledge of NIST CSF, NIST 800-53, ISO 27001
- Eligibility to obtain a US government security clearance
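As context for the differential privacy item above, the Laplace mechanism is the textbook building block: noise scaled to sensitivity/epsilon is added before a statistic is released. This is a minimal sketch with made-up numbers, not a production implementation.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# The records and the epsilon value below are made-up illustrations.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(seed=42)
records = np.array([52_000, 61_000, 58_000, 75_000, 49_000])

# A counting query has sensitivity 1: adding or removing one person's
# record changes the count by at most 1.
noisy_count = laplace_mechanism(len(records), sensitivity=1.0, epsilon=0.5, rng=rng)
print(f"true count: {len(records)}, noisy count: {noisy_count:.2f}")
```

Smaller epsilon means more noise and stronger privacy; choosing epsilon per query and accounting for the total privacy budget is where most of the real engineering lies.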
Preferred Certifications
- AI/ML: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer
- Security: CISSP, CCSP, CompTIA Security+, CEH
- Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available)
Your journey at A&M
We prioritize professional development, well-being, and opportunity, offering training, mentorship, and a meritocratic culture with room to grow. Details on benefits are provided during recruiting.
The salary range is $115,000 - $155,000 annually, plus a discretionary bonus based on performance. Benefits include healthcare, retirement plans, paid time off, holidays, and parental leave.
Equal Opportunity Employer
Alvarez & Marsal is an Equal Opportunity Employer. We do not discriminate on the basis of race, color, religion, sex, national origin, disability, or any other protected characteristic. See policy statements and regional information for details.