Alvarez & Marsal
Senior Associate, AI Security & Compliance-National Security
Alvarez & Marsal, Atlanta, Georgia, United States, 30383
Senior Associate, AI Security & Compliance - National Security
Join to apply for the
Senior Associate, AI Security & Compliance - National Security
role at
Alvarez & Marsal Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 professionals in more than 40 countries. We solve complex problems for clients and empower them to reach their potential. Our culture celebrates independent thinkers and doers who positively impact our clients and shape our industry. We are guided by core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity. Description
With the rapid adoption of AI technologies and evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing exponentially. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as trusted advisors to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation. Responsibilities
Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to AI Act (EU), NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems. Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security. Conduct technical assessments of AI/ML systems using tools such as AI Security Tools: Adversarial Robustness Toolbox (ART), Foolbox, CleverHans; MLOps Platforms: MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI; Model Monitoring: Evidently AI, Fiddler AI, WhyLabs, Neptune.ai; LLM Security: Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools; Privacy-Preserving ML: PySyft, TensorFlow Privacy, Opacus. Implement AI compliance and governance solutions addressing regulatory frameworks such as EU AI Act, Canada's AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework; industry standards ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF; sector-specific requirements including FDA AI/ML medical device regulations, GDPR Article 22, SR 11-7 model risk management. Develop and execute penetration testing specifically for AI systems, including model extraction attacks, data poisoning vulnerability assessments, membership inference and model inversion testing, prompt injection and jailbreaking assessments for LLMs, and backdoor detection in neural networks. Program and deploy custom security solutions using languages (Python, R, Julia), AI frameworks (Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools), security libraries (SHAP, LIME, Fairlearn, AIF360), and infrastructure (Docker, Kubernetes, Terraform, SIEM, SOAR platforms). Integrate AI security with Zero Trust architecture, IAM solutions, and SIEM platforms. Assess and mitigate risks in foundation models, federated learning systems, edge AI deployments, multi-modal AI systems, and generative AI applications (GPT, DALL‑E, Stable Diffusion). Create technical documentation including AI system security architecture reviews, threat models specific to ML pipelines, compliance mappings, and remediation roadmaps aligned with NIST 800‑53, ISO 27001, and AI-specific frameworks. Availability for up to 15% travel required to client sites and assessment locations. Qualifications
3+ years of experience in AI/ML development, deployment, or security assessment. 2+ years of experience in information security, with a focus on application security or cloud security. Hands‑on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit‑learn, Hugging Face). Proficiency in Python programming with experience in AI/ML libraries and security testing tools. Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks). Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act requirements, ISO/IEC 23053/23894. Experience with MLOps tools and secure model deployment practices. Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework). Familiarity with privacy‑preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics). Experience with containerization (Docker, Kubernetes) and infrastructure as code. Knowledge of traditional security frameworks (NIST CSF, NIST 800‑53, ISO 27001). Ability to obtain a USG security clearance. Preferred Certifications
One or more AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer. Security certifications: CISSP, CCSP, CompTIA Security+, CEH. Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available). Benefits
We offer competitive benefits and opportunities to support your personal and professional development. A&M recognizes that our people drive our growth, and you will be provided with the best available training and development resources through formalized and on‑the‑job training, as well as networking opportunities with renowned experts. Equal Opportunity Employer
Alvarez & Marsal is an Equal Opportunity Employer. We provide and promote equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination based on race, color, creed, religion, national origin, ancestry, citizenship status, sex or gender, gender identity or gender expression, sexual orientation, marital status, military service and veteran status, physical or mental disability, family medical history, genetic information or other protected characteristics, and comply with all applicable laws.
#J-18808-Ljbffr
Join to apply for the
Senior Associate, AI Security & Compliance - National Security
role at
Alvarez & Marsal Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 professionals in more than 40 countries. We solve complex problems for clients and empower them to reach their potential. Our culture celebrates independent thinkers and doers who positively impact our clients and shape our industry. We are guided by core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity. Description
With the rapid adoption of AI technologies and evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing exponentially. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as trusted advisors to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation. Responsibilities
Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to AI Act (EU), NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems. Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security. Conduct technical assessments of AI/ML systems using tools such as AI Security Tools: Adversarial Robustness Toolbox (ART), Foolbox, CleverHans; MLOps Platforms: MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI; Model Monitoring: Evidently AI, Fiddler AI, WhyLabs, Neptune.ai; LLM Security: Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools; Privacy-Preserving ML: PySyft, TensorFlow Privacy, Opacus. Implement AI compliance and governance solutions addressing regulatory frameworks such as EU AI Act, Canada's AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework; industry standards ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF; sector-specific requirements including FDA AI/ML medical device regulations, GDPR Article 22, SR 11-7 model risk management. Develop and execute penetration testing specifically for AI systems, including model extraction attacks, data poisoning vulnerability assessments, membership inference and model inversion testing, prompt injection and jailbreaking assessments for LLMs, and backdoor detection in neural networks. Program and deploy custom security solutions using languages (Python, R, Julia), AI frameworks (Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools), security libraries (SHAP, LIME, Fairlearn, AIF360), and infrastructure (Docker, Kubernetes, Terraform, SIEM, SOAR platforms). Integrate AI security with Zero Trust architecture, IAM solutions, and SIEM platforms. Assess and mitigate risks in foundation models, federated learning systems, edge AI deployments, multi-modal AI systems, and generative AI applications (GPT, DALL‑E, Stable Diffusion). Create technical documentation including AI system security architecture reviews, threat models specific to ML pipelines, compliance mappings, and remediation roadmaps aligned with NIST 800‑53, ISO 27001, and AI-specific frameworks. Availability for up to 15% travel required to client sites and assessment locations. Qualifications
3+ years of experience in AI/ML development, deployment, or security assessment. 2+ years of experience in information security, with a focus on application security or cloud security. Hands‑on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit‑learn, Hugging Face). Proficiency in Python programming with experience in AI/ML libraries and security testing tools. Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks). Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act requirements, ISO/IEC 23053/23894. Experience with MLOps tools and secure model deployment practices. Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, ATLAS framework). Familiarity with privacy‑preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics). Experience with containerization (Docker, Kubernetes) and infrastructure as code. Knowledge of traditional security frameworks (NIST CSF, NIST 800‑53, ISO 27001). Ability to obtain a USG security clearance. Preferred Certifications
One or more AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer. Security certifications: CISSP, CCSP, CompTIA Security+, CEH. Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available). Benefits
We offer competitive benefits and opportunities to support your personal and professional development. A&M recognizes that our people drive our growth, and you will be provided with the best available training and development resources through formalized and on‑the‑job training, as well as networking opportunities with renowned experts. Equal Opportunity Employer
Alvarez & Marsal is an Equal Opportunity Employer. We provide and promote equal opportunity in employment, compensation, and other terms and conditions of employment without discrimination based on race, color, creed, religion, national origin, ancestry, citizenship status, sex or gender, gender identity or gender expression, sexual orientation, marital status, military service and veteran status, physical or mental disability, family medical history, genetic information or other protected characteristics, and comply with all applicable laws.
#J-18808-Ljbffr