Jobs via Dice
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Cerebra Consulting Inc., is a System Integrator and IT Services Solution provider focusing on Big Data, Business Analytics, Cloud Solutions, Amazon Web Services, Salesforce, Oracle EBS, Peoplesoft, Hyperion, Oracle Configurator, Oracle CPQ, Oracle PLM, and Custom Application Development.
GenAI Engineer, Fraud Team (PoC Development)
Location:
Newark, DE (preferred) or Pennington, NJ (4 days on-site required).
Experience Level:
1–2+ Years.
Employment Type:
Contract (12–18 months).
Interview Process:
Video interview followed by an on‑site interview in Newark, DE (preferred) or Pennington, NJ.
Overview
Our client's Fraud Team is seeking a motivated and technically skilled GenAI Engineer to support the development of a proof of concept (PoC) focused on integrating large language models (LLMs) into enterprise fraud detection systems. This hands‑on role is ideal for candidates with 1–2 years of experience taking GenAI‑based products into production, particularly within an on‑premises environment.
The team has a strong foundation in AI and machine learning and is now expanding into generative AI applications, including chatbots and retrieval‑augmented generation (RAG) systems. You’ll be part of a forward‑thinking group that builds secure, scalable AI solutions within the bank’s internal network infrastructure.
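For illustration only, the short Python sketch below shows the general shape of a retrieval‑augmented generation (RAG) flow of the kind described above. Everything in it is an assumption for the sake of the example: the on‑premises LLM endpoint (INTERNAL_LLM_URL), the sample documents, the keyword‑overlap retriever standing in for a real vector store, and the JSON request/response shape. It is not the client's actual stack.

import requests

INTERNAL_LLM_URL = "http://llm.internal.example/v1/generate"  # hypothetical on-prem endpoint

DOCUMENTS = [
    "Wire transfers over $10,000 require a secondary fraud review.",
    "Chargeback disputes must be logged within 24 hours of the claim.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive keyword overlap; a stand-in for a real vector store."""
    q_terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def answer(query):
    # Build a grounded prompt from the retrieved context, then call the (assumed) internal endpoint.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    resp = requests.post(INTERNAL_LLM_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("text", "")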
Key Responsibilities
Collaborate with the fraud team to build and refine PoC solutions using GenAI technologies.
Integrate selected LLMs into the client’s enterprise platforms via a secure, on‑premises network layer.
Develop and enforce model controls and guardrails to ensure compliance and audit readiness.
Assist in transitioning PoC models into production within the bank’s enterprise systems.
Support chatbot development initiatives and contribute to future conversational AI projects.
Work closely with internal stakeholders to ensure solutions align with fraud detection goals and enterprise standards.
Required Qualifications
Experience:
1–2 years of hands‑on experience deploying GenAI or ML‑based products into production.
Background in traditional machine learning with a transition into generative AI.
Technical Skills:
Proficiency in Python and ML libraries such as Scikit‑learn (primary), NumPy, Pandas, Matplotlib, TensorFlow, or PyTorch.
Experience working with LLMs such as Llama 3, Llama 4, GPT‑4, or GPT‑5.
Familiarity with GenAI frameworks and tools like LangChain, LangGraph, Langbase, or LlamaIndex.
Understanding of Retrieval‑Augmented Generation (RAG) techniques.
Exposure to OpenAI APIs or similar LLM providers.
Other Requirements:
Must be comfortable working in an on‑premises environment (no cloud tools).
Experience with chatbot development is a strong plus.
Ability to work in a highly regulated, security‑conscious enterprise setting.
Preferred Attributes
Strong problem‑solving skills and ability to work independently on PoC development.
Excellent communication and collaboration skills.
Familiarity with enterprise AI governance and audit processes.
Please share the profiles at or call me at .