Ernst & Young Advisory Services Sdn Bhd
EY - GDS Consulting - AI and DATA - Responsible AI - Senior
Ernst & Young Advisory Services Sdn Bhd, Indiana, Pennsylvania, US, 15705
Other locations: Primary Location Only
Date: Jan 6, 2026
Requisition ID: 1638704
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Responsible AI Engineer (Governance & Safety) - Senior
Role summary
You will lead AI governance, safety, and compliance engineering, operationalizing LLMOps/Agentic Ops, security and privacy controls, LLM evaluation, and guardrails/ethics for production AI systems. The remit spans policy to code: frameworks, controls, tooling, documentation, red teaming, and continuous evaluation.
Key responsibilities
Establish and run LLMOps/Agentic Ops practices (lifecycle, approvals, versioning, observability, incident response/playbooks) integrated with platform tooling.
Define and enforce governance & security controls (PII protection, data residency, model access, content safety, jailbreak/prompt injection defenses), integrating with enterprise security.
Build LLM evaluation pipelines (groundedness/faithfulness, toxicity/PII, bias/fairness, robustness) and quality gates for pre-prod and ongoing post-deployment monitoring (a minimal illustrative sketch of such a quality gate follows this list).
Implement guardrails/ethics (policy as code, allow/deny lists, safety filters, red teaming harnesses) and ensure compliant documentation (model cards, data sheets, DPIAs).
Contribute production-grade Python libraries, policies, and APIs that product teams can adopt “as a service”; partner with platform teams on AWS/Azure controls.
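To make the evaluation and guardrail responsibilities above concrete, the following is a minimal Python sketch of a pre-prod quality gate: it checks hypothetical eval metrics against a policy-as-code threshold set and applies a toy deny-list guardrail. All names here (GATE_POLICY, EvalResult, check_quality_gate) are illustrative assumptions, not an EY framework or product API.

# Minimal illustrative sketch (assumptions only): a pre-prod quality gate that
# enforces policy-as-code thresholds on evaluation metrics and a simple
# deny-list guardrail before a model/prompt version is promoted.
from dataclasses import dataclass

# Hypothetical policy-as-code: thresholds a release must meet.
GATE_POLICY = {
    "groundedness_min": 0.85,   # faithfulness to retrieved sources
    "toxicity_max": 0.01,       # max share of flagged generations
    "pii_leak_max": 0.0,        # no PII leakage tolerated
}

DENY_TERMS = {"ssn", "credit card number"}  # toy deny-list for illustration

@dataclass
class EvalResult:
    groundedness: float
    toxicity_rate: float
    pii_leak_rate: float
    sample_outputs: list[str]

def violates_deny_list(outputs: list[str]) -> bool:
    """Toy guardrail: flag any output containing a denied term."""
    return any(term in out.lower() for out in outputs for term in DENY_TERMS)

def check_quality_gate(result: EvalResult) -> tuple[bool, list[str]]:
    """Return (passed, reasons) for a promotion decision."""
    reasons = []
    if result.groundedness < GATE_POLICY["groundedness_min"]:
        reasons.append("groundedness below threshold")
    if result.toxicity_rate > GATE_POLICY["toxicity_max"]:
        reasons.append("toxicity rate above threshold")
    if result.pii_leak_rate > GATE_POLICY["pii_leak_max"]:
        reasons.append("PII leakage detected")
    if violates_deny_list(result.sample_outputs):
        reasons.append("deny-list term found in sampled outputs")
    return (not reasons, reasons)

if __name__ == "__main__":
    demo = EvalResult(0.9, 0.0, 0.0, ["The policy summary cites section 4."])
    passed, reasons = check_quality_gate(demo)
    print("PASS" if passed else f"FAIL: {reasons}")

In practice such a gate would sit in the CI/CD or LLMOps pipeline described above, with thresholds versioned alongside model and prompt artifacts so that promotion decisions are auditable.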
Must-have skills
LLMOps/Agentic Ops patterns and tooling
Governance/Security (threat modeling for LLMs, privacy, access controls)
LLM Evaluation frameworks (human and automatic metrics; eval harnesses)
Guardrails/Ethics techniques and incident playbooks
Python API development for governance services
Good to have
Familiarity with RAG/advanced agentic AI to guide safe design choices
Qualifications & experience
B.Tech/M.Tech/MS in CS/EE or equivalent.
4+ years in Responsible AI Engineering.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.