Crowe
AI Governance Consulting – Senior Consultant
Job Description
At Crowe’s AI Governance Consulting team, we help organizations build, assess, run, and audit responsible AI programs. We align AI practices with business goals, risk appetite, and evolving regulations and standards (e.g., NIST AI RMF 1.0, ISO/IEC 42001, EU AI Act), enabling clients to adopt AI confidently and safely.
Responsibilities
Client delivery support: Contribute to standing up or maturing AI governance programs (policy, standards, controls, roles/operating model, metrics). Draft deliverables, facilitate workshops, and capture decisions and action items.
Assessment & testing: Help perform current-state assessments, control testing, and readiness reviews for AI/ML and generative AI use cases; summarize findings and recommendations.
Run-state operations: Support intake reviews, model/solution documentation, risk scoring, approvals, model registry updates, monitoring, and issue/incident tracking.
Regulatory & standards research: Monitor and synthesize developments (frameworks and guidance) into clear, client-friendly insights and updates to project materials.
Sales enablement: Assist managers with proposals and SOWs, pricing inputs, meeting notes, and client-ready collateral (decks, one-pagers, POVs, demos).
Offering development: Contribute to playbooks, control libraries, templates, accelerators, and internal training content.
Thought leadership: Draft articles/blogs, presentation materials, and webinar content; support delivery at events.
Project management hygiene: Maintain plans, status reports, and stakeholder communications; elevate risks/issues promptly.
Qualifications
Required
2+ years hands‑on experience in AI governance/Responsible AI (policy, risk, controls, compliance, or assurance of AI/ML systems).
3+ years in professional services/consulting, risk management, and/or compliance, including client‑facing work.
Strong analytical, research, and problem-solving skills; able to break down complex topics and communicate clearly in writing and verbally to business and technical audiences.
Experience creating polished deliverables (PowerPoint, Word) and working with spreadsheets for analysis (Excel).
Bachelor’s degree required; advanced degree in a relevant field (information systems, public policy, statistics, law) a plus.
Strong written and verbal communication and comprehension skills, both formal and informal, with clients and teams across a variety of formats and settings.
Certification: AIGP (Artificial Intelligence Governance Professional) or progress toward completion within 12 months; related credentials (e.g., CIPT/CIPP/CIPM, audit/ISO implementer) a plus.
Willingness to travel.
Preferred
Familiarity with AI lifecycle practices (data, model development, evaluation, monitoring, human‑in‑the‑loop) and governance frameworks/standards (e.g., NIST AI RMF, ISO/IEC governance standards, emerging regulations).
Exposure to generative AI risk controls (prompt/data controls, evaluation methods, usage policies).
Experience with GRC/data governance or ML tooling (e.g., ServiceNow/Archer, Collibra/Alation, model registries/ML platforms).
Consulting fundamentals: facilitation, requirements gathering, basic project economics (scope, timeline, budget awareness).
EEO Statement
Crowe LLP provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, sexual orientation, gender identity or expression, genetics, national origin, disability or protected veteran status, or any other characteristic protected by federal, state, or local laws.