Job Title: Security Engineer, AI Red Teaming & Threat Analytics
Location: Washington, DC (Hybrid, 3 Days Onsite)
Type: 6-Month Contract-to-Hire
Clearance: Must be eligible for a Public Trust clearance
Overview:
Our client is seeking a Security Engineer with a strong background in AI security, red teaming, and adversarial testing. This role focuses on securing enterprise LLM platforms such as Microsoft Copilot, Azure OpenAI, and AWS Bedrock by identifying vulnerabilities, mitigating misuse, and implementing robust threat-detection systems.
Key Responsibilities:
Perform adversarial testing and red teaming against AI/LLM systems
Detect and mitigate risks including prompt injection, data leakage, and hallucinations
Build and maintain monitoring pipelines for logging, alerting, and threat detection
Collaborate with cloud, DevSecOps, and AI engineering teams on risk remediation
Integrate findings with SIEM/SOAR tools and enterprise risk dashboards
Required Skills & Experience:
5+ years of cybersecurity or red teaming experience, ideally with AI/ML exposure
Proficiency in Python and familiarity with machine learning tools
Experience with cloud platforms such as AWS, Azure, or GCP
Strong understanding of access controls, secure data handling, and compliance frameworks
Preferred Qualifications:
Experience with Copilot, Azure OpenAI, or AWS Bedrock
Background in AI threat modeling, hallucination mitigation, or misuse detection
Familiarity with integrating security tools into enterprise cloud environments