LanceSoft Inc

AI Security and Controls Subject Matter Expert

LanceSoft Inc, New York, New York, US, 10261


Overview

Hybrid: three days a week onsite. We are seeking an AI Security and Controls Subject Matter Expert to design and execute an AI assurance strategy, a risk and control matrix, and supporting guidance.

What you'll do in the role:

Conduct Model Audits: Execute a wide range of assurance activities focused on the controls, governance, and risk management of generative AI models used within the organisation.
Model Security & Privacy Reviews: Review and assess privacy controls, data protection measures, and security protocols applied to AI models, including data handling, access management, and compliance with regulatory standards.
GenAI Model Familiarity: Maintain a good understanding of current and upcoming GenAI models.
Adopt New Audit Tools: Stay current with and implement new audit tools and techniques relevant to AI/ML systems, including model interpretability, fairness, and robustness assessment tools.
Risk Communication: Develop clear and concise messages regarding risks and business impact related to AI models, including model bias, drift, and security vulnerabilities.
Data-Driven Analysis: Identify, collect, and analyse data relevant to model performance, privacy, and security, leveraging both structured and unstructured sources.
Control Testing: Test controls over AI model development, deployment, monitoring, and lifecycle management, including data lineage, model versioning, and access controls.
Issue Identification: Identify control gaps and open risks, raise insightful questions to identify root causes and business impact, and draw appropriate conclusions.

About Internal Audit

The Internal Audit Department (IAD) reports directly to the Board Audit Committee and is an objective and independent function within Client's risk management framework. IAD assists senior management and the Audit Committee of the Board in the effective discharge of their legal, fiduciary, and oversight responsibilities. IAD comprises over 400 employees globally and is responsible for providing independent assurance on the quality and effectiveness of Client's system of internal control, including risk management and governance systems and processes. IAD also fosters continual improvement of risk management processes by identifying and assessing operating risks and evaluating the adequacy and effectiveness of the Firm's related internal controls. In doing so, we help direct the Firm's resources toward its areas of greatest vulnerability.

What you'll bring to the role:

Experience: At least 3-4 years' relevant experience in technology audit, AI/ML, data privacy, or information security.
Audit Knowledge: Understanding of audit principles, tools, and processes (risk assessments, planning, testing, reporting, and continuous monitoring), with a focus on AI/ML systems.
Communication: Ability to communicate clearly and concisely, adapting messages for technical and non-technical audiences.
Analytical Skills: Ability to identify patterns, anomalies, and risks in model behaviour and data.

Education

Master's or bachelor's degree (Computer Science, Data Science, Information Security, or related field preferred).

Certifications

CISA, CISSP, or relevant AI/ML certifications (preferred, not required).

Technical Knowledge

AI/ML model development and deployment processes
Model interpretability, fairness, and robustness concepts
Privacy frameworks (e.g., GDPR, CCPA)
Security standards (e.g., NIST, ISO 27001/02)
Data governance and protection practices
