MindFort AI (YC X25)
Senior Security Pentester - AI Tutor
MindFort AI (YC X25), Los Angeles, California, United States, 90079
Senior AI Security Tutor (Penetration Testing Instructor)
MindFort AI is looking for experienced penetration testers to help train and shape our next-generation AI security systems. Your role is to teach, guide, and correct an LLM so it learns to think and operate like a seasoned offensive security engineer. You’ll translate your real‑world testing instincts into structured demonstrations, critiques, and step‑by‑step reasoning that strengthen the AI’s ability to find, explain, and remediate vulnerabilities.

Base pay range
$30.00/hr - $110.00/hr

You’re a Strong Fit If You
- Are located in the United States.
- Bring a deep foundation in Computer Science, Cybersecurity, or equivalent experience.
- Have 6+ years of offensive security or penetration testing work across web apps, infrastructure, cloud, or mixed environments.
- Know the core frameworks and tooling inside out (OWASP, MITRE ATT&CK, Burp Suite, nmap, Metasploit, custom scripts).
- Have experience designing attack scenarios, threat models, and red‑team style workflows, ideally with exposure to AI/ML systems.
- Are comfortable breaking down complex exploitation chains into clear reasoning an AI can learn from.
- Communicate cleanly and precisely in writing, with an emphasis on clarity of risk, root cause, and mitigation.
- Work with discipline, documenting your process as you go.

What You’ll Do
- Teach and correct an LLM through structured examples of real penetration testing methodology.
- Help the AI understand reconnaissance, exploitation, validation, and reporting at a professional level.
- Provide expert feedback to improve accuracy, depth, and security‑focused reasoning.
- Collaborate directly with our applied AI engineering team and CTO as we refine training loops.

Logistics
- Applications are reviewed on a rolling basis.
- Up to 40 hours of work per week; minimum commitment of 10 hours weekly.
- Fully remote and asynchronous.
- Engagement length: approximately 1–3 months.

If you’re ready to help build an AI that can think like a world‑class offensive security engineer, we’d love to talk.