Logo
Crowe

AI Governance Consulting – Technical Manager

Crowe, Boston, Massachusetts, us, 02298

Save Job

AI Governance Consulting – Technical Manager Join to apply for the

AI Governance Consulting – Technical Manager

role at

Crowe .

At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you’re trusted to deliver results and make an impact. We embrace you for who you are, care for your well‑being, and nurture your career. Everyone has equitable access to opportunities for growth and leadership. Over our 80‑year history, delivering excellent service through innovation has been core to our DNA across our audit, tax, and consulting teams.

Crowe’s AI Governance Consulting team helps organizations build, assess, run, and audit responsible AI programs. We align AI practices with business goals, risk appetite, and evolving regulations and standards (e.g., NIST AI RMF 1.0, ISO/IEC 42001, EU AI Act), enabling clients to adopt AI confidently and safely.

Responsibilities

Independent Testing: Design and execute independent test plans for classical ML and LLMs/GenAI (functional accuracy, robustness, safety, toxicity, jailbreak/prompt‑injection, hallucination/error rates); define acceptance criteria and go/no‑go recommendations.

Sales Enablement: Partner with teams to qualify opportunities, shape solutions/SOW/ELs, develop proposals and pricing, and contribute to pipeline reviews. Build client‑ready collateral.

Offering Development: Evolve Crowe’s AI Governance methodologies, accelerators, control libraries, templates, and training. Incorporate updates from standards/regulators into our playbooks (e.g., NIST’s GAI profile).

Thought Leadership: Publish insights, speak on webinars/events, and support marketing campaigns to grow brand presence.

People Leadership: Supervise, coach, and develop consultants; manage engagement economics (scope, timeline, budget, quality) and support recruiting.

Bias/Fairness: Plan and run bias/fairness assessments using appropriate population slices and fairness metrics; document mitigations per NIST guidance on identifying/managing bias.

Explainability Evaluation: Produce model explainability/transparency artifacts (e.g., model cards, method docs) and apply techniques (SHAP, LIME, feature attributions) aligned to NIST’s Four Principles of Explainable AI.

Qualifications

3+ years hands‑on AI governance/Responsible AI experience (policy, controls, risk, compliance, or assurance of AI/ML systems).

5+ years in compliance, risk management, and/or professional services/consulting with client‑facing delivery and team leadership.

Strong Python and SQL (evaluation pipelines, data prep, metric computation, scripting CI jobs).

Demonstrated experience designing fairness/bias tests and applying explainability methods; ability to translate results for non‑technical stakeholders.

Practical knowledge of NIST AI RMF 1.0 (and GenAI profile), ISO/IEC 42001, and awareness of EU AI Act obligations for high‑risk systems.

Prior experience should include progressive responsibilities, including supervising and reviewing the work of others, project management, and self‑management of simultaneous work‑streams.

Strong written and verbal communication and comprehension in a variety of formats and settings, including interviews, meetings, calls, e‑mails, reports, process narratives, and presentations.

Networking and relationship management.

Willingness to travel.

Preferred Qualifications

Experience operationalizing LLM/GenAI evaluations (adversarial/red‑team testing, toxicity/harm scoring, retrieval/grounding, hallucination measurement, safety policies) consistent with NIST guidance.

Hands‑on with ML Ops/observability (e.g., model registries, data validation, drift detection), cloud (AWS/Azure/GCP), and containerization.

Familiarity with governance and compliance platforms (e.g., GRC systems) and collaboration with privacy/security/legal.

Bachelor’s degree required; advanced degree a plus (CS, statistics, data science, information systems, or related).

Certification: AIGP – Artificial Intelligence Governance Professional (IAPP) or equivalent credential in AI governance/privacy/risk (e.g., CIPP/CIPM/CIPT with AI coursework, ISO/IEC 42001 implementer/auditor).

Application Deadline The application deadline for this role is 12/12/2025.

Benefits We offer a comprehensive total rewards package that includes health, dental, vision, retirement, paid time off, professional development, and more.

Growth Opportunities We provide an inclusive culture that values diversity and encourages career development through coaching and mentorship.

EEO Statement Crowe LLP provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, sexual orientation, gender identity or expression, genetics, national origin, disability or protected veteran status, or any other characteristic protected by federal, state or local laws.

#J-18808-Ljbffr