Inflection AI
Member of Technical Staff - Model Training
Inflection AI, Palo Alto, California, United States, 94306
At Inflection AI, our public benefit mission is to harness the power of AI to improve human well-being and productivity.
The next era of AI will be defined by agents we trust to act on our behalf.
We're pioneering this future with human-centered AI models that unite emotional intelligence (EQ) and raw intelligence (IQ), transforming interactions from transactional to relational and creating enduring value for individuals and enterprises alike.
Our work comes to life in two ways today:
Pi, your personal AI, designed to be a kind and supportive companion that elevates everyday life with practical assistance and perspectives.
Platform: large language models (LLMs) and APIs that enable builders, agents, and enterprises to bring Pi-class emotional intelligence into experiences where empathy and human understanding matter most.
We are building toward a future of AI agents that earn trust, deepen understanding, and create aligned, long-term value for all.
About the Role
As a Model Training engineer, you will design, build, and scale the post-training pipelines that turn a general LLM into a brand-fluent, production-ready assistant. Your innovations in fine-tuning and preference optimization (RLHF, DPO, GRPO, RLAIF) will directly improve reliability and alignment while reducing cost.
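For context on the preference-optimization methods named above, here is a minimal, illustrative PyTorch sketch of the DPO objective. It is not Inflection's implementation; the tensor names (per-sequence log-probabilities of chosen and rejected responses under the policy and a frozen reference model) and the beta value are assumptions for the example.

# Illustrative only: a minimal DPO (Direct Preference Optimization) loss in PyTorch.
# Not Inflection's internal code; input names and beta are assumptions for the example.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Per-sequence log-probability ratios of the trained policy vs. a frozen reference.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Reward margin between the preferred and rejected responses, scaled by beta.
    margin = beta * (chosen_logratios - rejected_logratios)
    # Maximize the likelihood that the preferred response is ranked higher.
    return -F.logsigmoid(margin).mean()

# Toy usage: random log-probabilities standing in for real model outputs.
batch = 4
loss = dpo_loss(torch.randn(batch), torch.randn(batch),
                torch.randn(batch), torch.randn(batch))
print(f"DPO loss on a toy batch: {loss.item():.4f}")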
This is a good role for you if you:
- Have hands-on experience training and fine-tuning large transformer models on multi-GPU / multi-node clusters.
- Are fluent in PyTorch and its ecosystem tools (Torchtune, FSDP, DeepSpeed) and enjoy digging into distributed-training internals, mixed precision, and memory-efficiency tricks.
- Have shipped or published work in RLHF, DPO, GRPO, or RLAIF and understand their practical trade-offs.
- Care deeply about training tools, pipelines, and reproducibility: you automate the boring parts so you can iterate on the fun parts.
- Balance research curiosity with product pragmatism: you know when to run an ablation and when to ship.
- Communicate crisply with both technical and non-technical teammates.
Responsibilities include:
- Contribute to end-to-end post-training workflows (dataset curation, hyper-parameter search, evaluation, and rollout) using PyTorch, Torchtune, FSDP/DeepSpeed, and our internal orchestration stack (see the illustrative sketch after this list).
- Prototype and compare alignment techniques (e.g., curriculum RL, multi-objective reward modeling, tool-use fine-tuning) and push the best ideas into production.
- Automate training at scale: build robust pipeline components, tools, scripts, and dashboards so experiments are reproducible and easy to trace.
- Define the metrics that matter; run A/B tests and iterate quickly to meet aggressive quality targets.
- Collaborate with inference, safety, and product teams to land improvements in customer-facing systems.
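The first responsibility above mentions Torchtune and FSDP/DeepSpeed. As a purely illustrative sketch (not Inflection's orchestration stack), the following shows one way to shard a small transformer layer with PyTorch FSDP in bf16 and take a single optimization step; the model, data, and hyper-parameters are placeholders.

# Hypothetical sketch: sharding a model with PyTorch FSDP for a post-training step.
# Placeholders throughout; launch with `torchrun --nproc_per_node=<num_gpus> train.py`.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

def main():
    dist.init_process_group("nccl")  # torchrun provides RANK / WORLD_SIZE / MASTER_ADDR
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    # Stand-in model: a single transformer layer instead of a full LLM.
    model = torch.nn.TransformerEncoderLayer(d_model=1024, nhead=16).cuda()
    model = FSDP(
        model,
        mixed_precision=MixedPrecision(param_dtype=torch.bfloat16,
                                       reduce_dtype=torch.bfloat16),
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # One dummy step on random activations, standing in for a real fine-tuning batch.
    x = torch.randn(16, 8, 1024, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()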
Employee Pay Disclosures
At Inflection AI, we aim to attract and retain the best employees and compensate them in a way that appropriately and fairly values their individual contributions to the company. For this role, Inflection AI estimates that the starting annual base salary will fall in the range of approximately $175,000 - $350,000, depending on experience. This estimate can vary with experience and related factors, so the actual starting annual base salary may be above or below this range.
Interview Process
Apply: Please apply on LinkedIn or our website for a specific role.
After speaking with one of our recruiters, you'll enter our structured interview process, which includes the following stages:
- Hiring Manager Conversation: an initial discussion with the hiring manager to assess fit and alignment.
- Technical Interview: a deep dive with an Inflection Engineer to evaluate your technical expertise.
- Onsite Interview: a comprehensive assessment, including a domain-specific interview, a system design interview, and a final conversation with the hiring manager.
Depending on the role, we may also ask you to complete a take-home exercise or deliver a presentation.
For non-technical roles, be prepared for a role-specific interview, such as a portfolio review.
Decision Timeline
We aim to provide feedback within one week of your final interview.