Invent, implement, and deploy state-of-the-art machine learning and domain-specific algorithms and systems. Build prototypes and explore conceptually new solutions. Work collaboratively with science, engineering, and product teams to identify customer needs, create and implement solutions, promote innovation, and drive model implementations. Apply data science capabilities and research findings to build solutions at scale. Develop new intelligence around core products and services through applied research on behalf of our customers, producing the models, prototypes, and experiments that pave the way for innovative products and services.
About the Role
We are seeking an exceptional Principal Applied Scientist with deep expertise in Responsible AI to join our fast‑growing AI/ML research team. In this role, you will drive the development and evaluation of scalable safeguards for foundation models, with a focus on large language or multi‑modal models (LLMs/LMMs). Your work will directly influence how we design, deploy, and monitor trustworthy AI systems across a broad range of products.
What You’ll Do
Conduct cutting‑edge research and development in Responsible AI, including fairness, robustness, explainability, and safety for generative models
Design and implement safeguards, red teaming pipelines, and bias mitigation strategies for LLMs and other foundation models
Contribute to the fine‑tuning and alignment of LLMs using techniques such as prompt engineering, instruction tuning, and RLHF/DPO
Define and implement rigorous evaluation protocols (e.g., bias audits, toxicity analysis, robustness benchmarks)
Collaborate cross‑functionally with product, policy, legal, and engineering teams to ensure Responsible AI principles are embedded throughout the model lifecycle
Publish in top‑tier venues (e.g., NeurIPS, ICML, ICLR, ACL, CVPR) and represent the company in academic and industry forums
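To give a flavor of the evaluation protocols mentioned above, here is a minimal, hypothetical sketch of a toxicity-style audit over model outputs. The flagged-term list and pass threshold are illustrative placeholders, not any real product policy or the team's actual methodology:

```python
# Minimal sketch of a toxicity-style evaluation protocol.
# FLAGGED_TERMS and the threshold are illustrative assumptions,
# not a production safety policy.

FLAGGED_TERMS = {"hate", "slur", "threat"}  # hypothetical placeholder lexicon

def toxicity_rate(outputs):
    """Fraction of model outputs containing any flagged term."""
    flagged = sum(
        1 for text in outputs
        if any(term in text.lower() for term in FLAGGED_TERMS)
    )
    return flagged / len(outputs) if outputs else 0.0

def passes_audit(outputs, threshold=0.01):
    """An audit 'passes' if the flagged rate stays below the threshold."""
    return toxicity_rate(outputs) < threshold

samples = ["The forecast looks fine today.", "That was a veiled threat."]
print(toxicity_rate(samples))  # 0.5
```

Real audits would of course replace the keyword match with a learned classifier and stratify results across demographic slices; the sketch only shows the protocol's shape (score a corpus, compare against a threshold).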
Minimum Qualifications
Ph.D. in Computer Science, Machine Learning, NLP, or a related field, with publications in top‑tier AI/ML conferences or journals
Hands‑on experience with LLMs including fine‑tuning, evaluation, and prompt engineering
Demonstrated expertise in building or evaluating Responsible AI systems (e.g., fairness, safety, interpretability)
Proficiency in Python and ML/DL frameworks such as PyTorch or TensorFlow
Strong understanding of model evaluation techniques and metrics related to bias, robustness, and toxicity
Creative problem‑solving skills with a rapid prototyping mindset and a collaborative attitude
Preferred Qualifications (Nice to Have)
Experience with RLHF (reinforcement learning from human feedback) or other alignment methods
Open‑source contributions in the AI/ML community
Experience working with model guardrails, safety filters, or content moderation systems
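To illustrate the kind of guardrail and safety-filter systems referenced above, here is a minimal hypothetical sketch of a pre-generation input filter. The blocked patterns are made-up placeholders for demonstration, not a real moderation ruleset:

```python
import re

# Hypothetical guardrail: screen user prompts before they reach the model.
# The patterns below are illustrative placeholders, not a real policy.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"\bsocial security number\b", re.IGNORECASE),
]

def guardrail_check(prompt):
    """Return (allowed, reason); block prompts matching any pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked by pattern: {pattern.pattern}"
    return True, "ok"

allowed, reason = guardrail_check("What's the weather like today?")
print(allowed)  # True
```

Production guardrails typically combine such rule-based screens with ML classifiers and apply them on both inputs and outputs; the sketch shows only the simplest input-side layer.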
Why Join Us (Winnipeg)
You’ll be working at the intersection of AI innovation and Responsible AI, helping shape the next generation of safe and trustworthy machine learning systems. If you’re passionate about ensuring AI benefits everyone—and you have the technical depth to back it up—we want to hear from you.
Business Justification
This role is critical to advancing our AI strategy by accelerating the development and deployment of scalable, production‑grade machine learning solutions that directly support customer use cases and drive differentiation in our core products. By combining cutting‑edge research with applied innovation, this role will enable us to:
Address high‑priority technical gaps in areas such as model safety, bias mitigation, explainability, and performance optimization
Increase customer trust and adoption through more robust and responsible AI systems
Strengthen our IP portfolio and thought leadership through novel model development and peer‑reviewed publications
This position not only supports innovation but directly impacts revenue, customer satisfaction, and competitive advantage by ensuring our AI systems are performant, responsible, and aligned with user needs.