Oracle

Principal Applied Scientist at Oracle

About the Role

We are seeking an exceptional expert in Responsible AI to join our fast-growing AI/ML research team. In this role, you will drive the development and evaluation of scalable safeguards for foundation models, with a focus on large language and multi-modal models (LLMs/LMMs). Your work will influence how we design, deploy, and monitor trustworthy AI systems across a broad range of products.

What You’ll Do
- Conduct cutting-edge research and development in Responsible AI, including fairness, robustness, explainability, and safety for generative models
- Design and implement safeguards, red-teaming pipelines, and bias mitigation strategies for LLMs and other foundation models
- Contribute to the fine-tuning and alignment of LLMs using techniques such as prompt engineering, instruction tuning, and RLHF/DPO
- Define and implement rigorous evaluation protocols (e.g., bias audits, toxicity analysis, robustness benchmarks)
- Collaborate cross-functionally with product, policy, legal, and engineering teams to embed Responsible AI principles throughout the model lifecycle
- Publish in top-tier venues (e.g., NeurIPS, ICML, ICLR, ACL, CVPR) and represent the company in academic and industry forums

Minimum Qualifications
- Ph.D. in Computer Science, Machine Learning, NLP, or a related field, with publications in top-tier AI/ML conferences or journals
- Hands-on experience with LLMs, including fine-tuning, evaluation, and prompt engineering
- Demonstrated expertise in building or evaluating Responsible AI systems (e.g., fairness, safety, interpretability)
- Proficiency in Python and ML/DL frameworks such as PyTorch or TensorFlow
- Strong understanding of model evaluation techniques and metrics related to bias, robustness, and toxicity
- Creative problem-solving skills with a rapid-prototyping mindset and a collaborative attitude

Preferred Qualifications
- Experience with RLHF (Reinforcement Learning from Human Feedback) or other alignment methods
- Open-source contributions in the AI/ML community
- Experience working with model guardrails, safety filters, or content moderation systems

Why Join Us
You’ll be working at the intersection of AI innovation and Responsible AI, helping shape the next generation of safe and trustworthy machine learning systems. If you’re passionate about ensuring AI benefits everyone, and you have the technical depth to back it up, we want to hear from you.

Disclaimer and Benefits
Disclaimer:

Certain US customer- or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD: $120,100 - $251,600 per year. May be eligible for bonus, equity, and compensation deferral.

Oracle maintains broad salary ranges for its roles to account for variations in knowledge, skills, experience, market conditions, and locations. Candidates are typically placed into the range based on the factors above as well as internal peer equity.

Oracle US offers a comprehensive benefits package including:

- Medical, dental, and vision insurance
- Short-term and long-term disability
- Life insurance and AD&D
- Flexible spending accounts and commuter benefits
- 401(k) with company match
- Paid time off and holidays
- Paid parental leave and adoption assistance
- Employee stock purchase plan
- Financial planning and group legal services

The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’re committed to growing an inclusive workforce and providing opportunities for all. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, or protected veteran status.