Logo
Alvarez & Marsal

Manager, AI Security & Compliance - Cybersecurity Governance

Alvarez & Marsal, Los Angeles, California, United States, 90079

Save Job

Overview

Manager, AI Security & Compliance - Cybersecurity Governance at Alvarez & Marsal. Join to apply for the

Manager, AI Security & Compliance - Cybersecurity Governance

role at

Alvarez & Marsal . With the rapid adoption of AI technologies and evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing. Our team supports organizations, investors and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as trusted advisors to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation.

Responsibilities

Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to AI Act (EU), NIST AI RMF, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.

Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.

Conduct technical assessments of AI/ML systems using tools such as ART, Foolbox, CleverHans for adversarial testing; MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI for ML operations; Evidently AI, Fiddler AI, WhyLabs, Neptune.ai for drift detection and explainability; Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools; PySyft, TensorFlow Privacy, Opacus for privacy-preserving ML.

Implement AI compliance and governance solutions addressing regulatory frameworks (EU AI Act, Canada’s AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework) and industry standards (ISO/IEC 23053, 23894, IEEE 7000 series, NIST AI RMF).

Develop and execute penetration testing for AI systems, including model extraction attacks and defenses, data poisoning vulnerability assessments, membership inference and model inversion testing, prompt injection and jailbreaking assessments for LLMs, and backdoor detection in neural networks.

Program and deploy security solutions using Python (PyTorch, TensorFlow, scikit-learn), R, Julia; AI frameworks such as Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools; security libraries like SHAP, LIME; bias detection tools like Fairlearn, AIF360; and infrastructure tools including Docker, Kubernetes, Terraform.

Integrate AI security with traditional security frameworks (Zero Trust, IAM, SIEM) and implement automated compliance monitoring using AI-powered security orchestration (SOAR) platforms.

Assess and mitigate risks in foundation models and transfer learning, federated learning, edge AI deployments, multi-modal AI systems, and Generative AI applications (GPT, DALL-E, Stable Diffusion).

Create technical documentation including AI system security architecture reviews, threat models for ML pipelines, compliance mappings, and remediation roadmaps aligned with traditional standards (NIST 800-53, ISO 27001) and AI-specific frameworks.

Availability for up to 15% travel required to client sites and assessment locations.

Qualifications

5+ years of experience in AI/ML development, deployment, or security assessment

3+ years of experience in information security, with focus on application security or cloud security

Hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit-learn, Hugging Face)

Proficiency in Python programming with experience in AI/ML libraries and security testing tools

Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks)

Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act requirements, ISO/IEC 23053/23894

Experience with MLOps tools and secure model deployment practices

Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework)

Familiarity with privacy-preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics)

Experience with containerization (Docker, Kubernetes) and infrastructure as code

Knowledge of traditional security frameworks (NIST CSF, NIST 800-53, ISO 27001)

Ability to obtain a USG security clearance

Preferred Certifications

One or more AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer

Security certifications: CISSP, CCSP, CompTIA Security+, CEH

Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available)

Your journey at A&M We recognize that our people are the driving force behind our success, which is why we prioritize an employee experience that fosters each person’s unique professional and personal development. Our robust performance development process promotes continuous learning, rewards your contributions, and fosters a culture of meritocracy. With top-notch training and on-the-job learning opportunities, you can acquire new skills and advance your career.

We prioritize your well-being, providing benefits and resources to support you on your personal journey. Our people consistently highlight the growth opportunities, our unique, entrepreneurial culture, and the fun we have together as their favorite aspects of working at A&M. The possibilities are endless for high-performing and passionate professionals.

Benefits and Compliance Regular employees working 30 or more hours per week are entitled to participate in Alvarez & Marsal Holdings’ fringe benefits including healthcare plans, flexible spending and savings accounts, life, AD&D, and disability coverages, and a 401(k) retirement savings plan with potential discretionary contributions. Paid time off includes vacation, personal days, up to 72 hours of sick time (prorated), 10 federal holidays, one floating holiday, and parental leave. The amount of vacation and personal days varies by tenure and role type. For details, see A&M benefits information. The salary range is $115,000 - $155,000 annually, with discretionary bonus potential. This role does not require lie detector testing; Massachusetts law regarding lie detector tests is noted here. Inclusive Diversity and Equal Opportunity Employer statements are provided as part of A&M policy statements.

Location and Availability Culver City, CA; Santa Monica, CA; Los Angeles area opportunities may be listed with posted salary ranges. This description reflects the role and context for Alvarez & Marsal’s AI Security & Compliance leadership track.

#J-18808-Ljbffr