AI Security Tester - Red Team Job at The Cervantes Group in Dallas
The Cervantes Group, Dallas, TX, United States, 75215
Overview
Client Delivery Advisor | IT & Corporate Recruitment
Role Description: The AI Tester will be joining the InfoSec Cyber team and will be responsible for conducting Adversarial Testing, enhancing Threat Monitoring & Intelligence, all contributing to continuous improvement and innovation within our banking client’s Cyber red team. This person will be participating in the hands-on testing of GenAI systems with ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection). The ideal person will have experience enhancing AI model monitoring, testing, and threat detection.
Responsibilities
- Design and execute controlled adversarial attacks (prompt injection, input/output evaluation, data exfiltration, misinformation generation)
- Evaluate GenAI models against known and emerging AI-specific attack vectors.
- Develop reusable test repositories, scripts, and automation to continuously challenge models.
- Partner with developers to recommend remediation strategies for discovered vulnerabilities.
- Continuously monitor the external threat landscape for new GenAI-related attack methods (e.g., malicious prompt engineering, fine-tuned model abuse).
- Correlate findings with internal AI deployments to identify potential exposure points.
- Build relationships with threat intelligence providers, industry groups, and government regulators to stay ahead of adversarial AI trends.
- Partner with Cybersecurity, SOC, and DevSecOps teams to integrate adversarial testing into the broader enterprise security framework.
- Collaborate with AI/ML engineering teams to embed adversarial resilience into the development lifecycle ("shift-left" AI security).
- Provide training and awareness sessions for business units leveraging GenAI.
- Develop custom adversarial testing frameworks tailored to the organization’s specific use cases.
- Evaluate and recommend security tools and platforms for AI model monitoring, testing, and threat detection.
- Contribute to enterprise AI security strategy by bringing forward new practices, frameworks, etc.
Qualifications
Required Qualifications & Experience
- Bachelor’s Degree is required
- 4+ years’ experience with hands-on adversarial testing of GenAI systems (prompt injection/jailbreaks, input–output evals, data-exfil testing) with actionable remediation
- Cybersecurity red-team / penetration testing background and strong Python/scripting for automation and test harnesses
- ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection).
- Familiarity with AI security frameworks: NIST AI RMF or MITRE ATLAS or OWASP Top 10 for LLMs
- Experience with AI/MLOps platforms & integration frameworks (Azure AI or AWS SageMaker; OpenAI API/Hugging Face; LangChain or equivalent) in an enterprise setting
Desired Qualifications & Experience
- Exposure to governance/risk for AI (model risk, policy alignment)
- SIEM/SOAR & threat-intel integration and monitoring
- Experience with building reusable adversarial test repos, scripts, and automation
Employment details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Engineering and Information Technology
Referrals increase your chances of interviewing at The Cervantes Group by 2x
Notes
We’re not including location-based alerts and ancillary postings in this refined description. Base pay range and other non-core lines have been retained only as part of the role’s detail where appropriate.