Mercor
Remote AI Red-Teamer — Adversarial AI Testing (Advanced); English & Korean
Location:
Remote; restricted to the USA and South Korea
Type:
Full-time or Part-time Contract Work
Fluent Language Skills Required:
English & Korean; native-level fluency in both is required for this position.
Why This Role Exists
At Mercor, we believe the safest AI is the one that has already been attacked, by us. For this project we are assembling a red team of human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers. The work involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. Topics are clearly communicated before you are exposed to any content.
What You’ll Do
Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation
Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
Document reproducibly: produce reports, datasets, and attack cases customers can act on
Who You Are
You bring prior red teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
You’re curious and adversarial: you instinctively push systems to breaking points
You’re structured: you use frameworks or benchmarks, not just random hacks
You’re communicative: you explain risks clearly to technical and non-technical stakeholders
You’re adaptable: you thrive on moving across projects and customers
Nice-to-Have Specialties
Adversarial ML: jailbreak datasets, prompt injection, RLHF / DPO attacks, model extraction
Cybersecurity: penetration testing, exploit development, reverse engineering
Socio-technical risk: harassment / disinfo probing, abuse analysis, conversational AI testing
Creative probing: psychology, acting, writing for unconventional adversarial thinking
What Success Looks Like
You uncover vulnerabilities that automated tests miss
You deliver reproducible artifacts that strengthen customer AI systems
Evaluation coverage expands: more scenarios tested, fewer surprises in production
Mercor customers trust the safety of their AI because you’ve already probed it like an adversary
Why Join Mercor
Build experience in human-data-driven AI red teaming at the frontier of safety. Play a direct role in making AI systems more robust, safe, and trustworthy.
Compensation:
The contract rate for this project is aligned with the level of expertise required, the sensitivity of the material, and the scope of work; rates are competitive and commensurate with experience.