Kanak Elite Services
REMOTE ROLE :: Security Specialist, Gen AI
Kanak Elite Services, Trenton, New Jersey, United States
Security Specialist, Gen AI
This is more of an AI tester role rather than a Security Specialist, who will be joining Santander's Cyber Team. Job Description/Key Responsibilities Adversarial Testing: Design and execute controlled adversarial attacks (prompt injection, input/output evaluation, data exfiltration, misinformation generation) Evaluate GenAI models against known and emerging AI-specific attack vectors. Develop reusable test repositories, scripts, and automation to continuously challenge models. Partner with developers to recommend remediation strategies for discovered vulnerabilities. Threat Monitoring & Intelligence: Continuously monitor the external threat landscape for new GenAI-related attack methods (e.g., malicious prompt engineering, fine-tuned model abuse). Correlate findings with internal AI deployments to identify potential exposure points. Complete assessment of existing technical controls and identify enhancements. Build relationships with threat intelligence providers, industry groups, and government regulators to stay ahead of adversarial AI trends. Cross-Functional Collaboration: Partner with Cybersecurity, SOC, and DevSecOps teams to integrate adversarial testing into the broader enterprise security framework. Collaborate with AI/ML engineering teams to embed adversarial resilience into the development lifecycle ("shift-left" AI security). Provide training and awareness sessions for business units leveraging GenAI. Continuous Improvement & Innovation: Develop custom adversarial testing frameworks tailored to the organization's specific use cases. Evaluate and recommend security tools and platforms for AI model monitoring, testing, and threat detection. Contribute to enterprise AI security strategy by bringing forward new practices, frameworks, and technologies. Must-Have Requirements: 5+ years of experience Hands-on adversarial testing of GenAI systems (prompt injection/jailbreaks, input output evals, data-exfil testing) with actionable remediation Cybersecurity red-team / penetration testing background and strong Python/scripting for automation and test harnesses ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection) Familiarity with AI security frameworks: NIST AI RMF or MITRE ATLAS or OWASP Top 10 for LLMs Experience with AI/MLOps platforms & integration frameworks (Azure AI or AWS SageMaker; OpenAI API/Hugging Face; LangChain or equivalent) in an enterprise setting Nice-to-Haves: Exposure to governance/risk for AI (model risk, policy alignment) SIEM/SOAR & threat-intel integration and monitoring Experience with building reusable adversarial test repos, scripts, and automation We are an equal opportunities employer and welcome applications from all qualified candidates.
#J-18808-Ljbffr
This is more of an AI tester role rather than a Security Specialist, who will be joining Santander's Cyber Team. Job Description/Key Responsibilities Adversarial Testing: Design and execute controlled adversarial attacks (prompt injection, input/output evaluation, data exfiltration, misinformation generation) Evaluate GenAI models against known and emerging AI-specific attack vectors. Develop reusable test repositories, scripts, and automation to continuously challenge models. Partner with developers to recommend remediation strategies for discovered vulnerabilities. Threat Monitoring & Intelligence: Continuously monitor the external threat landscape for new GenAI-related attack methods (e.g., malicious prompt engineering, fine-tuned model abuse). Correlate findings with internal AI deployments to identify potential exposure points. Complete assessment of existing technical controls and identify enhancements. Build relationships with threat intelligence providers, industry groups, and government regulators to stay ahead of adversarial AI trends. Cross-Functional Collaboration: Partner with Cybersecurity, SOC, and DevSecOps teams to integrate adversarial testing into the broader enterprise security framework. Collaborate with AI/ML engineering teams to embed adversarial resilience into the development lifecycle ("shift-left" AI security). Provide training and awareness sessions for business units leveraging GenAI. Continuous Improvement & Innovation: Develop custom adversarial testing frameworks tailored to the organization's specific use cases. Evaluate and recommend security tools and platforms for AI model monitoring, testing, and threat detection. Contribute to enterprise AI security strategy by bringing forward new practices, frameworks, and technologies. Must-Have Requirements: 5+ years of experience Hands-on adversarial testing of GenAI systems (prompt injection/jailbreaks, input output evals, data-exfil testing) with actionable remediation Cybersecurity red-team / penetration testing background and strong Python/scripting for automation and test harnesses ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection) Familiarity with AI security frameworks: NIST AI RMF or MITRE ATLAS or OWASP Top 10 for LLMs Experience with AI/MLOps platforms & integration frameworks (Azure AI or AWS SageMaker; OpenAI API/Hugging Face; LangChain or equivalent) in an enterprise setting Nice-to-Haves: Exposure to governance/risk for AI (model risk, policy alignment) SIEM/SOAR & threat-intel integration and monitoring Experience with building reusable adversarial test repos, scripts, and automation We are an equal opportunities employer and welcome applications from all qualified candidates.
#J-18808-Ljbffr