Logo
Alvarez & Marsal

Manager, AI Security & Compliance - Cybersecurity Governance

Alvarez & Marsal, New York, New York, us, 10261

Save Job

Manager, AI Security & Compliance - Cybersecurity Governance Alvarez & Marsal (A&M) is a global consulting firm dedicated to solving client problems and driving potential. A&M’s culture celebrates independent thinkers, doers, and a commitment to integrity, quality, objectivity, fun, personal reward, and inclusive diversity.

How You Will Contribute With rapid AI adoption and evolving regulatory landscape, demand grows for AI-focused security analysis and compliance expertise. The team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI deployment, algorithmic bias, data privacy, and model security. We implement secure AI/ML pipelines, establish AI governance frameworks, conduct model risk assessments, and ensure compliance with emerging AI regulations.

Responsibilities

Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to AI Act (EU), NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.

Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.

Conduct technical assessments of AI/ML systems using tools such as

Adversarial Robustness Toolbox (ART) ,

Foolbox ,

CleverHans ,

MLflow ,

Kubeflow ,

Amazon SageMaker ,

Azure ML ,

Google Vertex AI ,

Evidently AI ,

WhyLabs ,

NeoGuardrails ,

LangChain security modules ,

PySyft ,

TensorFlow Privacy ,

Opacus , etc.

Implement AI compliance and governance solutions addressing regulatory frameworks (EU AI Act, Canada AIDA, US AI Executive Orders, Singapore Model AI Governance Framework) and industry standards (ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF).

Develop and execute penetration testing for AI systems, including model extraction attacks, data poisoning vulnerability assessments, membership inference and model inversion testing, prompt injection and jailbreaking assessments, and backdoor detection.

Program and deploy custom security solutions using languages such as Python, R, Julia; frameworks like Hugging Face Transformers, LangChain, LlamaIndex; security libraries SHAP, LIME, Fairlearn, AIF360; infrastructure Docker, Kubernetes, Terraform.

Integrate AI security with traditional security frameworks including Zero Trust architecture, IAM solutions, SIEM platforms, and SOAR platforms (Splunk Phantom, Palo Alto Cortex XSOAR).

Assess and mitigate risks in foundation models, federated learning, edge AI deployments, multi-modal AI systems, and generative AI applications (GPT, DALL‑E, Stable Diffusion).

Create technical documentation including AI system security architecture reviews, threat models, compliance mappings, and remediation roadmaps aligned with NIST 800‑53, ISO 27001, and AI‑specific frameworks.

Availability for up to 15% travel to client sites and assessment locations.

Qualifications

5+ years of experience in AI/ML development, deployment, or security assessment.

3+ years of experience in information security, focusing on application security or cloud security.

Hands‑on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit‑learn, Hugging Face).

Proficiency in Python programming with AI/ML libraries and security testing tools.

Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks).

Knowledge of AI compliance frameworks (NIST AI RMF, EU AI Act, ISO/IEC 23053/23894).

Experience with MLOps tools and secure model deployment practices.

Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework).

Familiarity with privacy‑preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics).

Experience with containerization (Docker, Kubernetes) and infrastructure as code.

Knowledge of traditional security frameworks (NIST CSF, NIST 800‑53, ISO 27001).

Ability to obtain a USG security clearance.

Preferred Certifications

AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer.

Security certifications: CISSP, CCSP, CompTIA Security+, CEH.

Specialized certifications: GIAC AI Security Essentials (GAISE), Certified AI Auditor.

Benefits & Compensation Salary range: $115,000 – $155,000 annually, with a discretionary bonus program based on individual and firm performance. A&M offers comprehensive fringe benefits including healthcare plans, flexible spending accounts, life insurance, AD&D, disability coverage, a 401(k) retirement savings plan with employer match, paid vacation, personal days, sick time, federal holidays, floating holidays, and parental leave.

Equal Opportunity Employer Alvarez & Marsal is an equal opportunity employer. We provide and promote equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination based on race, color, creed, religion, national origin, ancestry, citizenship status, sex, gender identity or expression, sexual orientation, marital status, military service, veteran status, physical or mental disability, family medical history, genetic information, or any other protected characteristic. For policy statements and additional information by region, visit the A&M website.

#J-18808-Ljbffr