TSR Consulting

AI Security and Controls Consultant

TSR Consulting, New York, New York, US, 10261


About TSR:

TSR is a relationship-based, customer-focused IT and technical services staffing company.

For over 40 years TSR, Inc. and its wholly owned subsidiary, TSR Consulting Services, have prospered in the Information Technology staffing business, earning the respect of companies both large and small with well refined candidate screening, timely placement, and a real understanding of the right skill sets required by our clients.

Mission & Vision

We do not believe in building a vision around the company, but in building a company around our vision, which is simply:

Every employee's voice matters, their effort is appreciated, and their talent is rewarded.

We challenge each employee daily to raise the bar on how we treat our consultants and candidates. For far too long in this industry, candidates have been ghosted, lied to, or placed at a client and then forgotten. Each day our staff works tirelessly to qualify and place top talent with our clients in a compassionate and caring manner.

Not every candidate is a match for the job, but every candidate and consultant will be treated with respect and professionalism.

AI Security and Controls Consultant

Job Description

Location: New York, New York
Type: Contract
Job #83509

Our client, a leading financial services company, is hiring an AI Security and Controls Consultant on a long-term contract basis. Job ID: 83509

Work Location: New York, NY - Hybrid

Summary: We're seeking a consultant to join the technology audit team within Internal Audit, to manage and execute risk-based assurance activities for the Firm's use of GenAI and artificial intelligence in general.

Responsibilities:

Conduct Model Audits: Execute a wide range of assurance activities focused on the controls, governance, and risk management of generative AI models used within the organization.

Model Security & Privacy Reviews: Review and assess privacy controls, data protection measures, and security protocols applied to AI models, including data handling, access management, and compliance with regulatory standards.

Familiarity with GenAI Models: Good understanding of current and upcoming GenAI models.

Adopt New Audit Tools: Stay current with and implement new audit tools and techniques relevant to AI/ML systems, including model interpretability, fairness, and robustness assessment tools.

Risk Communication: Develop clear and concise messages regarding risks and business impact related to AI models, including model bias, drift, and security vulnerabilities.

Data-Driven Analysis: Identify, collect, and analyze data relevant to model performance, privacy, and security, leveraging both structured and unstructured sources.

Control Testing: Test controls over AI model development, deployment, monitoring, and lifecycle management, including data lineage, model versioning, and access controls.

Issue Identification: Identify control gaps and open risks, raise insightful questions to identify root causes and business impact, and draw appropriate conclusions.

Required Skills:

Experience: At least 3-4+ years of relevant experience in technology audit, AI/ML, data privacy, or information security.

Audit Knowledge: Understanding of audit principles, tools, and processes (risk assessments, planning, testing, reporting, and continuous monitoring), with a focus on AI/ML systems.

Communication: Ability to communicate clearly and concisely, adapting messages for technical and non-technical audiences.

Analytical Skills: Ability to identify patterns, anomalies, and risks in model behaviors and data.

Certifications: CISA, CISSP, or relevant AI/ML certifications (preferred, not required).

Technical Knowledge: Strong understanding of AI/ML model development and deployment processes, and of model interpretability, fairness, and robustness concepts.

Education: Master's or Bachelor's degree (Computer Science, Data Science, Information Security, or related field preferred).

Pay: $94-$123 per hour.