Alvarez & Marsal
Manager, AI Security & Compliance - Cybersecurity Governance
Alvarez & Marsal, Boston, Massachusetts, US 02298
Overview
This Manager, AI Security & Compliance role sits within Alvarez & Marsal's (A&M's) Cybersecurity Governance practice. The role focuses on AI security analysis, governance, and regulatory compliance for AI systems, including risk assessments, secure AI/ML pipelines, and collaboration with clients across industries.

Responsibilities
- Lead technical teams in executing AI security assessments, model audits, and compliance reviews covering the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards.
- Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.
- Design and implement secure AI/ML architectures with MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure deployment pipelines.
- Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.
- Conduct technical assessments of AI/ML systems using tools and platforms such as the Adversarial Robustness Toolbox (ART), Foolbox, and CleverHans; MLflow, Kubeflow, SageMaker, Azure ML, and Vertex AI; Evidently AI, Fiddler AI, WhyLabs, and Neptune.ai; Guardrails AI, NeMo Guardrails, and LangChain security modules; PySyft, TensorFlow Privacy, and Opacus; and related libraries for explainability and bias detection.
- Implement AI compliance and governance solutions addressing regulatory frameworks, industry standards, and sector-specific requirements (EU AI Act, AIDA, US AI Executive Orders, GDPR Article 22; ISO/IEC 23053/23894; NIST AI RMF; FDA AI/ML regulatory considerations).
- Develop and execute penetration testing for AI systems, including model extraction defenses, data poisoning assessments, membership inference testing, prompt injection and jailbreaking assessments, and backdoor detection in neural networks.
- Program and deploy security solutions using Python (PyTorch, TensorFlow, scikit-learn), R, and Julia; AI frameworks such as Hugging Face Transformers, LangChain, and LlamaIndex; explainability and fairness libraries (SHAP, LIME, Fairlearn, AIF360); and infrastructure tools (Docker, Kubernetes, Terraform).
- Integrate AI security with traditional security frameworks (Zero Trust, IAM, SIEM) and implement automated compliance monitoring using AI-powered SOAR platforms (e.g., Splunk Phantom, Cortex XSOAR).
- Assess and mitigate risks in foundation models, federated learning, edge AI, multi-modal AI, and generative AI applications (GPT, DALL-E, Stable Diffusion).
- Create technical documentation, including AI system security architecture reviews, threat models for ML pipelines, compliance mappings, and remediation roadmaps aligned with traditional and AI-specific standards.
- Remain available for up to 15% travel to client sites and assessment locations.

Qualifications
- 5+ years of experience in AI/ML development, deployment, or security assessment
- 3+ years of experience in information security, with a focus on application security or cloud security
- Hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit-learn, Hugging Face)
- Proficiency in Python programming with AI/ML libraries and security testing tools
- Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks)
- Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act requirements, ISO/IEC 23053/23894
- Experience with MLOps tools and secure model deployment practices
- Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework)
- Familiarity with privacy-preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics)
- Experience with containerization (Docker, Kubernetes) and infrastructure as code
- Knowledge of traditional security frameworks (NIST CSF, NIST 800-53, ISO 27001)
- Ability to obtain a USG security clearance

Preferred Certifications
- One or more AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer
- Security certifications: CISSP, CCSP, CompTIA Security+, CEH
- Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available)

Compensation and Benefits
The salary range is $115,000 - $155,000 annually, dependent on education, experience, skills, and geography. A discretionary bonus program is offered based on performance. Benefits include healthcare, retirement plans, paid time off, and other standard programs. The company provides training and development resources and promotes a culture of meritocracy. A&M is an Equal Opportunity Employer and complies with applicable laws. It does not require lie detector tests and has policies regarding unsolicited resumes from third-party recruiters.

About Alvarez & Marsal
Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 professionals in over 40 countries. We solve clients’ problems with a hands-on approach and foster a culture of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity. For more information about benefits, diversity, and the employer policy statements by region, please refer to A&M’s official resources.
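For candidates gauging the hands-on side of the Responsibilities section: a minimal, hypothetical sketch of the kind of prompt-injection screening an LLM security control might start from. The pattern list and function name are illustrative, not A&M tooling; production guardrails (e.g., Guardrails AI or NeMo Guardrails) layer classifiers, canary tokens, and policy checks on top of simple pattern matching.

```python
import re

# Hypothetical deny-list of phrasings commonly seen in prompt-injection
# attempts. Illustrative only; a real control would not rely on regexes alone.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system|hidden) prompt",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe, False if it matches a known pattern."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

As the Responsibilities note, input screening like this is paired with output filtering and embedding-level checks; no single layer is sufficient on its own.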
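The privacy-preserving ML techniques listed under Qualifications can likewise be previewed with a toy sketch of the Laplace mechanism from differential privacy. All names and values here are illustrative; libraries such as TensorFlow Privacy and Opacus apply the same idea at the gradient level during training (DP-SGD).

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values: list, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so the Laplace noise scale is 1 / epsilon.
    """
    return len(values) + laplace_noise(1.0 / epsilon, rng)
```

Smaller epsilon means a larger noise scale and stronger privacy at the cost of accuracy, which is the core trade-off behind the differential-privacy familiarity the role asks for.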