Anomali

Senior Engineer, AI Evaluation & Reliability (Agentic AI)

Anomali, Redwood City, California, United States, 94061

Anomali is headquartered in Silicon Valley and is the leading AI‑powered platform for modernizing security operations. At its center is an omnipresent, intelligent, and multilingual Anomali Copilot that automates important tasks and empowers teams to deliver risk insights to management and the board in seconds. The Copilot navigates a proprietary cloud‑native security data lake that consolidates legacy attempts at visibility and provides first‑in‑market speed, scale, and performance while reducing the cost of security analytics. Anomali combines ETL, SIEM, XDR, SOAR, and the largest repository of global intelligence into one efficient platform.

Job Description

We're looking for a Senior Engineer, AI Evaluation & Reliability to lead the design and execution of evaluation, quality assurance, and release gating for our agentic AI features. You'll develop the pipelines, datasets, and dashboards that measure and improve agent performance across real‑world SOC workflows, ensuring every release is safe, reliable, efficient, and production‑ready. You will guarantee that our agentic AI features operate at full production scale, ingesting and acting on millions of SOC alerts per day, with measurable impact on analyst productivity and risk mitigation. This role partners closely with the Product team to deliver operational excellence and trust in every AI‑driven capability.

Key Responsibilities

Define quality metrics: translate SOC use cases into measurable KPIs (e.g., precision/recall, MTTR, false‑positive rate, step success rate, latency/cost budgets)

Build continuous evaluations: develop offline/online evaluation pipelines, regression suites, and A/B or canary tests; integrate them into CI/CD for release gating

Curate and manage datasets: maintain gold‑standard datasets and red‑team scenarios; establish data governance and drift monitoring practices

Ensure safety, reliability, and explainability: partner with Platform and Security Research to encode guardrails, policy enforcement, and runtime safety checks

Expand adversarial test coverage (prompt injection, data exfiltration, abuse scenarios)

Ensure explainability and auditability of agent decisions, maintaining traceability and compliance of AI‑driven workflows

Production reliability & observability: monitor and maintain reliability of agentic AI features post‑release—define and uphold SLIs/SLOs, establish alerting and rollback strategies, and conduct incident post‑mortems

Design and implement infrastructure to scale evaluation and production pipelines for real‑time SOC workflows across cloud environments

Drive agentic system engineering: experiment with multi‑agent systems, tool‑using language models, retrieval‑augmented workflows, and prompt orchestration

Manage model and prompt lifecycle: track versions, rollout strategies, and fallbacks; measure impact through statistically sound experiments

Collaborate cross‑functionally: work with Product, UX, and Engineering to prioritize high‑leverage improvements, resolve regressions quickly, and advance overall system reliability

Qualifications

Required Skills and Experience

5+ years building evaluation or testing infrastructure for ML/LLM systems or large‑scale distributed systems

Proven ability to translate product requirements into measurable metrics and test plans

Strong programming skills in Python (or a similar language) and experience with modern data tooling

Hands‑on experience running A/B tests, canaries, or experiment frameworks

Experience defining and maintaining operational reliability metrics (SLIs/SLOs) for AI‑driven systems

Familiarity with large‑scale distributed or streaming systems serving AI/agent workflows (millions of events or alerts/day)

Excellent communication skills—able to clearly convey technical results and trade‑offs to engineers, PMs, and analysts

This position is not eligible for employment visa sponsorship. The successful candidate must not now or in the future require visa sponsorship to work in the US.

Preferred Qualifications

Experience evaluating or deploying agentic or tool‑using AI systems (multi‑agent orchestration, retrieval‑augmented reasoning, prompt lifecycle management)

Familiarity with LLM evaluation frameworks (e.g., model‑graded evals, pairwise/rubric scoring, preference learning)

Exposure to AI safety testing, including prompt injection, data exfiltration, abuse taxonomies, and resilience validation

Understanding of explainability and compliance requirements for autonomous workflows, ensuring traceability and auditability of AI behavior

Background in security operations, incident response, or enterprise automation; comfortable interpreting logs, alerts, and playbooks

Startup experience delivering high‑impact systems in fast‑paced, evolving environments

Equal Opportunities Monitoring

We are an Equal Opportunity Employer. It is our policy to ensure that all eligible persons have equal opportunity for employment and advancement on the basis of their ability, qualifications, and aptitude. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, pregnancy, genetic information, disability, status as a protected veteran, or any other protected category under applicable federal, state, and local laws. If you are interested in applying for employment with Anomali and need special assistance or accommodation to apply for a posted position, contact our Recruiting team at recruiting@anomali.com.

Compensation Transparency

$140,000 - $200,000 USD. Please note that the annual base salary range is a guideline and, for candidates who receive an offer, the base pay will vary based on factors such as work location, qualifications, skills, and experience of the candidate. In addition to base pay, this position is eligible for benefits and may be eligible for equity.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
