Alvarez & Marsal

Senior Associate, AI Security & Compliance-National Security

Alvarez & Marsal, Chicago, Illinois, United States, 60290


About Alvarez & Marsal

Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 entrepreneurial, action- and results-oriented professionals in over 40 countries. We take a hands-on approach to solving our clients' problems and assisting them in reaching their potential. Our culture celebrates independent thinkers and doers who positively impact our clients and shape our industry. The collaborative environment and engaging work—guided by A&M's core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity—are why our people love working at A&M.

How You Will Contribute

With the rapid adoption of AI technologies and an evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing rapidly. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as trusted advisors to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation.

Responsibilities

Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to AI Act (EU), NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.

Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines. Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.

Conduct technical assessments of AI/ML systems using tools such as:

AI Security Tools: Adversarial Robustness Toolbox (ART), Foolbox, CleverHans for adversarial testing

MLOps Platforms: MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI

Model Monitoring: Evidently AI, Fiddler AI, WhyLabs, Neptune.ai for drift detection and explainability

LLM Security: Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools

Privacy-Preserving ML: PySyft, TensorFlow Privacy, Opacus for differential privacy implementation

Implement AI compliance and governance solutions addressing:

Regulatory Frameworks: EU AI Act, Canada's AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework

Industry Standards: ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF

Sector-Specific Requirements: FDA AI/ML medical device regulations, GDPR Article 22 (automated decision-making), SR 11-7 model risk management

Develop and execute penetration testing specifically for AI systems, including:

Model extraction attacks and defenses

Data poisoning vulnerability assessments

Membership inference and model inversion testing

Prompt injection and jailbreaking assessments for LLMs

Backdoor detection in neural networks

Program and deploy custom security solutions using:

Languages: Python (PyTorch, TensorFlow, scikit-learn), R, Julia

AI Frameworks: Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools

Security Libraries: SHAP, LIME for explainability; Fairlearn, AIF360 for bias detection

Infrastructure: Docker, Kubernetes, Terraform for secure AI deployment

Integrate AI security with traditional security frameworks including Zero Trust architecture, IAM solutions, and SIEM platforms. Implement automated compliance monitoring using AI-powered security orchestration tools (SOAR platforms like Splunk Phantom, Palo Alto Cortex XSOAR).

Assess and mitigate risks in:

Foundation models and transfer learning implementations

Federated learning systems

Edge AI deployments

Multi-modal AI systems

Generative AI applications (GPT, DALL-E, Stable Diffusion implementations)

Create technical documentation including AI system security architecture reviews, threat models specific to ML pipelines, compliance mappings, and remediation roadmaps aligned with both traditional security standards (NIST 800-53, ISO 27001) and AI-specific frameworks.

Availability for up to 15% travel to client sites and assessment locations is required.

Qualifications

3+ years of experience in AI/ML development, deployment, or security assessment

2+ years of experience in information security, with focus on application security or cloud security

Hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, scikit-learn, Hugging Face)

Proficiency in Python programming with experience in AI/ML libraries and security testing tools

Experience with cloud AI platforms (AWS SageMaker, Azure ML, Google Vertex AI, Databricks)

Knowledge of AI compliance frameworks: NIST AI RMF, EU AI Act requirements, ISO/IEC 23053/23894

Experience with MLOps tools and secure model deployment practices

Understanding of adversarial machine learning and AI security threats (OWASP ML Top 10, MITRE ATLAS framework)

Familiarity with privacy-preserving ML techniques (differential privacy, federated learning, homomorphic encryption basics)

Experience with containerization (Docker, Kubernetes) and infrastructure as code

Knowledge of traditional security frameworks (NIST CSF, NIST 800-53, ISO 27001)

Ability to obtain a USG security clearance

Preferred Certifications

One or more AI/ML certifications: AWS Certified Machine Learning, Google Cloud Professional ML Engineer, Azure AI Engineer

Security certifications: CISSP, CCSP, CompTIA Security+, CEH

Specialized: GIAC AI Security Essentials (GAISE), Certified AI Auditor (when available)

Your Journey at A&M

We recognize that our people are the driving force behind our success, which is why we prioritize an employee experience that supports each person’s unique professional and personal development. Our robust performance development process promotes continuous learning, rewards your contributions, and fosters a culture of meritocracy. With top-notch training and on-the-job learning opportunities, you can acquire new skills and advance your career.

We prioritize your well-being, providing benefits and resources to support you on your personal journey. Our people consistently highlight the growth opportunities, our unique, entrepreneurial culture, and the fun we have together as their favorite aspects of working at A&M. The possibilities are endless for high-performing and passionate professionals.


Equal Opportunity and Compliance

A&M is an Equal Opportunity Employer and prohibits discrimination based on race, color, creed, religion, national origin, ancestry, citizenship status, sex, gender, gender identity or gender expression, sexual orientation, marital status, military service, disability, or other protected characteristics. Where prohibited by law, we do not use lie detector tests as a condition of employment. See policy statements and regional information for more details.

Unsolicited Resumes: We do not accept unsolicited resumes from third-party recruiters unless engaged for a specified opening.
