Beth Israel Lahey Health
Senior Red Team AI Security Engineer (Remote)
Beth Israel Lahey Health, Boston, Massachusetts, us, 02298
Senior Red Team AI Security Engineer (Remote)
When you join the growing BILH team, you’re not just taking a job, you’re making a difference in people’s lives.
As a seasoned Red Team professional with experience securing AI systems, you will collaborate with AI researchers, product and project managers, and other engineers to proactively test and improve the resilience of our generative AI systems against real‑world threats, including prompt injection, data poisoning, and bias exploitation. You will also play a key role in driving red‑team best practices, ensuring ethical alignment, and safeguarding the integrity of generative AI models.
Essential Duties & Responsibilities
Lead and execute red‑team operations targeting AI/ML systems, simulating adversarial attacks such as prompt injections, data poisoning, model inversion, and evasion techniques to identify vulnerabilities in generative AI, computer vision, or NLP models.
Evaluate AI systems for ethical risks, biases, fairness issues, and harmful content generation; collaborate with trust and safety teams to assess compliance with responsible AI standards and mitigate potential societal impacts.
Develop and deploy custom tools, exploits, and automated testing frameworks for AI security assessments, including infrastructure for adversarial input generation and privacy attacks against federated learning environments.
Conduct research on emerging AI threats, publish findings on offensive AI techniques, and integrate AI/ML tools into red‑team methodologies for enhanced threat simulation and anomaly detection.
Mentor junior engineers in AI‑specific red‑team practices, lead small teams during engagements, and collaborate with blue teams to validate detections against AI‑targeted attacks such as jailbreaking large language models.
Stay current on cutting‑edge AI security research, adversarial machine learning techniques, and ethical AI frameworks to ensure robust red‑team practices.
Work closely with machine learning engineers, data scientists, product managers, and AI researchers to evaluate model performance under adversarial conditions and provide actionable recommendations for strengthening AI defenses.
Provide technical support to incident response teams, analyze vulnerabilities during investigations, and assist with corrective measures.
Collaborate with blue teams, purple teams, and broader security groups to stress‑test systems, validate detection mechanisms, and improve enterprise readiness.
Plan, coordinate, and execute full‑lifecycle red‑team operations, including reconnaissance, command and control setup, lateral movement, and adversary emulation to simulate real‑world attacks and test organizational defenses.
Minimum Qualifications
Education: BS preferred.
Certifications: Certified Ethical Hacker (CEH), OSCP, OSEP, or similar offensive security certifications; AI/ML security certifications preferred.
Experience: 3–5+ years in offensive security, penetration testing, or red‑team operations; 1–2+ years of hands‑on experience with AI/ML systems and frameworks.
Skills: Expertise with red‑team tools (Kali Linux, Metasploit, Wireshark, Burp Suite, etc.) and a strong foundation in machine learning fundamentals and model architectures.
Preferred Qualifications & Skills
Understanding of machine learning fundamentals and model architectures.
Experience with AI model interpretability and automation tools.
Experience with traditional penetration testing and red‑team methodologies.
Pay Range $112,320.00 – $145,600.00 USD
Equal Opportunity Employer / Veterans / Disabled