Alvarez & Marsal
Senior Associate, AI Security & Compliance – National Security
Alvarez & Marsal, Los Angeles, California, United States, 90079
About Alvarez & Marsal
Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 entrepreneurial, action‑oriented professionals in more than 40 countries. We take a hands‑on approach to solving our clients' problems and help them reach their potential. A&M's culture celebrates independent thinkers who positively impact our clients and shape our industry. The collaborative environment and engaging work—guided by A&M's core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity—are why people love working here.
Description
With the rapid adoption of AI technologies and an evolving regulatory landscape, the demand for AI‑focused security analysis and compliance expertise is growing rapidly. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI‑specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as a trusted advisor to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation.
Responsibilities
Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.
Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.
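To illustrate the simplest layer of the prompt injection prevention mentioned above, the sketch below implements a hypothetical deny‑list check on incoming prompts. The patterns and function name are invented for illustration; production guardrails such as Guardrails AI or NeMo Guardrails go well beyond regex matching.

```python
import re

# Hypothetical deny-list of phrasings commonly seen in prompt
# injection attempts. Illustrative only; a real filter would be
# far broader and combined with model-based classifiers.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal the system prompt",
    r"disregard .*guardrails",
]

def flag_prompt(text: str) -> bool:
    """Return True when the prompt matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(flag_prompt("Please ignore all previous instructions"))   # flagged
print(flag_prompt("Summarize this quarterly report"))           # allowed
```

In practice such a check would sit alongside output filtering and embedding‑level anomaly detection rather than serve as the sole control.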
Conduct technical assessments of AI/ML systems using tools such as:
Adversarial Robustness Toolbox (ART), Foolbox, CleverHans for adversarial testing
MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI for MLOps platforms
Evidently AI, Fiddler AI, WhyLabs, Neptune.ai for drift detection and explainability
Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools for LLM security
PySyft, TensorFlow Privacy, Opacus for privacy‑preserving ML
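As a sketch of the kind of adversarial testing the toolkits above automate, the snippet below applies a minimal FGSM‑style perturbation to a hand‑rolled logistic classifier. All weights and inputs are made up for illustration; a real engagement would drive ART or Foolbox against the production model.

```python
import numpy as np

# Toy logistic "model" with fixed, made-up weights.
w = np.array([2.0, -1.0])
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps):
    """One FGSM step: move x by eps in the sign of the loss gradient.
    For logistic loss, dL/dx = (p - y) * w."""
    grad = (predict_proba(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])            # clean input, true label 1
clean = predict_proba(x)            # model is confident in class 1
x_adv = fgsm(x, y=1.0, eps=0.8)
adv = predict_proba(x_adv)          # perturbation lowers confidence
print(clean > 0.5, adv < clean)
```

The same pattern, scaled up, is what an adversarial robustness assessment measures: how much perturbation budget (eps) is needed before the model's decisions degrade.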
Implement AI compliance and governance solutions addressing: EU AI Act, Canada’s AIDA, U.S. AI Executive Orders, Singapore’s Model AI Governance Framework; ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF; FDA AI/ML medical device regulations, GDPR Article 22, SR 11‑7 model risk management.
Develop and execute penetration testing specifically for AI systems, including model extraction attacks, data poisoning assessments, membership inference, model inversion, prompt injection, jailbreak assessments, and backdoor detection.
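One of the attacks listed above, membership inference, can be sketched with the classic loss‑threshold heuristic: an overfit model tends to assign lower loss to records it was trained on. The loss distributions below are synthetic and the threshold is chosen by hand, purely to illustrate the attack's mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: members (training records) cluster
# low, non-members cluster higher -- the signature of overfitting.
member_loss = rng.normal(0.2, 0.05, 200)
nonmember_loss = rng.normal(0.9, 0.2, 200)

def infer_membership(losses, threshold):
    """Loss-threshold attack: flag a record as a training member
    when its per-example loss falls below the threshold."""
    return losses < threshold

threshold = 0.5
tpr = infer_membership(member_loss, threshold).mean()     # true positives
fpr = infer_membership(nonmember_loss, threshold).mean()  # false positives
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

An assessment would report the gap between TPR and FPR across thresholds (e.g., as an ROC curve) as evidence of how much membership signal the model leaks.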
Program and deploy custom security solutions using Python (PyTorch, TensorFlow, scikit‑learn), R, Julia; Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools; Docker, Kubernetes, Terraform for secure AI deployment.
Integrate AI security with traditional security frameworks including Zero Trust architecture, IAM solutions, SIEM platforms, and implement automated compliance monitoring using SOAR platforms such as Splunk Phantom and Palo Alto Cortex XSOAR.
Assess and mitigate risks in foundation models, federated learning, edge AI, multi‑modal AI, generative AI applications.
Create technical documentation: AI system security architecture reviews, threat models for ML pipelines, compliance mappings, remediation roadmaps aligned with NIST 800‑53, ISO 27001, and AI‑specific frameworks.
Travel up to 15% of the time to client sites and assessment locations.
Who Will You Be Working With
At A&M, you will collaborate with a diverse team of supportive and motivated professionals who share deep expertise in investigations, AI governance, and security.
Who You’ll Grow With
As an AI Security & Compliance Senior Associate, you will gain experience across industries, AI use cases, emerging regulations, and responsible AI practices. You will translate complex AI risks into business‑relevant insights and actionable recommendations while receiving developmental feedback, pursuing growth opportunities, and leading technical workstreams on high‑profile projects.
Qualifications
3+ years of experience in AI/ML development, deployment, or security assessment
2+ years of experience in information security, focusing on application or cloud security
Hands‑on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit‑learn, Hugging Face)
Proficiency in Python and AI/ML security testing tools
Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks)
Knowledge of AI compliance frameworks (NIST AI RMF, EU AI Act, ISO/IEC 23053/23894)
Experience with MLOps tools and secure model deployment practices
Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework)
Familiarity with privacy‑preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics)
Experience with containerization (Docker, Kubernetes) and infrastructure as code
Knowledge of traditional security frameworks (NIST CSF, NIST 800‑53, ISO 27001)
Ability to obtain a U.S. government security clearance
Preferred Certifications
AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer
Security certifications: CISSP, CCSP, CompTIA Security+, CEH
Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available)
Benefits
A&M offers competitive benefits, including healthcare plans, flexible spending accounts, life and disability coverage, 401(k) retirement savings with company match, paid time off, and a discretionary bonus program. We support employee well‑being through training, development resources, networking opportunities, and a culture of meritocracy.
Equal Opportunity Employer
Alvarez & Marsal is committed to providing and promoting equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination based on race, color, creed, religion, national origin, ancestry, citizenship status, sex or gender, gender identity, sexual orientation, marital status, military service, disability, family medical history, genetic information, or any other protected characteristic. Employees and applicants can find further policy statements by region on the A&M website.