
AI Governance Consulting – Technical Manager

Crowe, San Francisco, California, United States, 94199


At Crowe, the AI Governance Consulting team assists organizations in building, assessing, running, and auditing responsible AI programs. The team aligns AI practices with business objectives, risk appetite, and evolving regulations (e.g., NIST AI RMF 1.0, ISO/IEC 42001, EU AI Act), enabling clients to adopt AI confidently and safely.

As a hands‑on Technical Manager, you will lead independent testing and operational monitoring of AI systems—including Generative AI (GenAI). You’ll design and execute evaluations, create monitoring pipelines, quantify risks (bias, robustness, safety, privacy), and provide transparent reporting to business, risk, and technical stakeholders. You will also mentor consultants and help evolve Crowe’s run‑state accelerators, test harnesses, and control libraries anchored in the NIST AI RMF.

Responsibilities

Independent Testing: Design and execute independent test plans for classical machine learning and LLMs/GenAI covering functional accuracy, robustness, safety, toxicity, jailbreak/prompt‑injection, and hallucination/error rates; define acceptance criteria and go/no‑go recommendations.

Sales Enablement: Partner with teams to qualify opportunities; shape solutions, statements of work (SOWs), and engagement letters (ELs); develop proposals and pricing; and contribute to pipeline reviews. Build client‑ready collateral.

Offering Development: Evolve Crowe’s AI Governance methodologies, accelerators, control libraries, templates, and training; incorporate updates from standards and regulators into playbooks (e.g., NIST’s GenAI profile).

Thought Leadership: Publish insights, speak on webinars/events, and support marketing campaigns to grow brand presence.

People Leadership: Supervise, coach, and develop consultants; manage engagement economics (scope, timeline, budget, quality) and support recruiting.

Bias/Fairness: Plan and run bias/fairness assessments using appropriate population slices and fairness metrics; document mitigations per NIST guidance.

Explainability: Produce model explainability/transparency artifacts (e.g., model cards, method docs) and apply techniques (SHAP, LIME, feature attributions) aligned with NIST’s Four Principles of Explainable AI.
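To illustrate the kind of bias/fairness assessment described above, the sketch below computes a demographic parity difference across population slices. This is a generic, minimal example of one common fairness metric, not Crowe's methodology; the group names and outcome data are hypothetical.

```python
# Illustrative sketch (not Crowe's methodology): demographic parity
# difference, one common metric a bias/fairness assessment might report.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across population slices."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions sliced by a protected attribute.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 0.375
}
gap = demographic_parity_difference(outcomes)
print(round(gap, 3))  # 0.25
```

In practice an assessment would report several such metrics (e.g., equalized odds, predictive parity) per slice, with documented thresholds and mitigations as NIST guidance suggests; a single number is only a starting point.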

Qualifications Required

3+ years hands‑on AI governance/Responsible AI experience (policy, controls, risk, compliance, or assurance of AI/ML systems).

5+ years in compliance, risk management, and/or professional services/consulting with client‑facing delivery and team leadership.

Strong Python and SQL skills for evaluation pipelines, data preparation, metric computation, and scripting CI jobs.

Demonstrated experience designing fairness/bias tests and applying explainability methods; ability to translate results for non‑technical stakeholders.

Practical knowledge of NIST AI RMF 1.0 (and GenAI profile) and ISO/IEC 42001; awareness of EU AI Act obligations for high‑risk systems.

Progressive responsibility including supervising and reviewing work of others, project management, and self‑management of simultaneous work‑streams.

Strong written and verbal communication in a variety of formats and settings (interviews, meetings, calls, e‑mails, reports, process narratives, presentations).

Networking and relationship management.

Willingness to travel.

Preferred

Experience operationalizing LLM/GenAI evaluations (adversarial/red‑team testing, toxicity/harm scoring, retrieval/grounding, hallucination measurement, safety policies) consistent with NIST guidance.

Hands‑on with ML Ops/observability (e.g., model registries, data validation, drift detection), cloud (AWS/Azure/GCP), and containerization.

Familiarity with governance and compliance platforms (e.g., GRC systems) and collaboration with privacy/security/legal.

Bachelor’s degree required; advanced degree a plus (CS, statistics, data science, information systems, or related).

Certification: AIGP – Artificial Intelligence Governance Professional (IAPP) or equivalent credential in AI governance/privacy/risk (e.g., CIPP/CIPM/CIPT with AI coursework, ISO/IEC 42001 implementer/auditor).

Benefits

At Crowe, we offer employees a comprehensive total rewards package and a culture that values diversity and inclusion. Learn more about our benefits and working environment.

How You Can Grow

We nurture talent in an inclusive culture that values diversity. You will have regular meetings with a Career Coach to guide you in your career goals and aspirations.

Application Deadline

12/05/2025

Salary Range

$102,400.00 – $204,100.00 per year

Location

San Francisco, CA

Work Authorization

All persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Crowe is not sponsoring for work authorization at this time.

EEO Statement

Crowe LLP provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, sexual orientation, gender identity or expression, genetics, national origin, disability or protected veteran status, or any other characteristic protected by federal, state or local laws. Crowe will consider all qualified applicants, including those with criminal histories, in a manner consistent with the requirements of applicable state and local laws. Crowe does not accept unsolicited candidates, referrals or resumes from any staffing agency, recruiting service, sourcing entity or any other third‑party paid service at any time. Any referrals, resumes or candidates submitted to Crowe, or to any employee or owner of Crowe, without a pre‑existing agreement signed by both parties covering the submission will be considered the property of Crowe, free of charge.
