Cargill

Sr Consultant, Data Management & Governance - AI & Data Science

Cargill, Minneapolis, Minnesota, United States, 55400



Cargill’s size and scale allow us to make a positive impact in the world. Our purpose is to nourish the world in a safe, responsible and sustainable way. We are a family company providing food, ingredients, agricultural solutions and industrial products that are vital for living. We connect farmers with markets so they can prosper. We connect customers with ingredients so they can make meals people love. And we connect families with daily essentials — from eggs to edible oils, salt to skincare, feed to alternative fuel. Our 160,000 colleagues, operating in 70 countries, make essential products that touch billions of lives each day. Join us and reach your higher purpose at Cargill.

Note: This description focuses on the Senior Consultant role in AI Data Management & Governance. Some boilerplate or role-related details may be included for context.

Overview

The Senior Consultant, AI Data Management & Governance builds the risk-assessment methods and control implementations that keep Cargill’s AI ecosystem safe, compliant, and trustworthy, spanning MLOps, LLMOps, third-party SaaS assistants, and agentic-process-automation (APA) platforms. Working with Security, Legal, Data Privacy, Platform Engineering, and Product teams, you will quantify AI-specific risks (e.g., model bias, prompt injection, tool-calling abuse), set enterprise guardrails, and ensure practical controls are embedded in every AI product and platform service.

Key Accountabilities

STRATEGIC PLANNING

Establish AI-specific risk categories (model risk, data-privacy leakage, third-party SaaS exposure, agent autonomy limits). Conduct complex risk assessments that quantify potential business impact and map exposure to the enterprise risk-appetite statement.

POLICY DEVELOPMENT & GOVERNANCE

Help maintain the enterprise AI risk register; score and prioritize risks arising from internal models, external APIs (OpenAI, Gemini, Anthropic), and APA tools. Develop due-diligence playbooks for vendor LLMs, SaaS copilots, optimization solvers (Gurobi Cloud), and hosted agent runtimes. Help create and maintain the AI Technology Governance Policy, including requirements for data sourcing, model evaluation, prompt safety, and human-in-the-loop review. Align internal standards with external frameworks such as NIST AI RMF, ISO/IEC 42001, and upcoming regional AI regulations (e.g., the EU AI Act).

DATA QUALITY

Translate policy into technical controls (e.g., model-card metadata, bias tests, prompt-filter APIs, secrets-management, lineage tracking) and verify deployment in MLOps/LLMOps pipelines. Lead periodic control testing, red-team exercises, and Responsible-AI reviews.

COMPLIANCE & RISK MANAGEMENT

Monitor global AI-related regulations; map new obligations to policy updates and platform backlog items. Coordinate evidence collection for audits and certifications (SOC 2, ISO 27001/42001).

DATA OPERATIONS & STEWARDSHIP

Define key risk indicators (KRIs) and performance metrics (e.g., model-drift incidents, unapproved prompt exceptions, vendor AI SLA breaches).

Qualifications

Minimum requirement of 4 years of relevant work experience. Typically reflects 5 years or more of relevant experience.

Preferred

6 years of relevant experience. 2+ years leading or coaching multidisciplinary teams on emerging-technology risk (AI/ML, cloud SaaS, or automation platforms).

Compensation & Benefits

The expected salary for this position is $105,000 - $155,000. Compensation varies depending on a wide array of factors including location, certifications, education, and level of experience. Eligible for a discretionary incentive award based on company and personal performance. Comprehensive benefits program including medical and other benefits dependent on position and hours worked. Minnesota Sick and Safe Leave accruals apply as per policy. Equal Opportunity Employer, including Disability/Vet.
