Rippling
Rippling gives businesses one place to run HR, IT, and Finance. It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers. For the first time ever, you can manage and automate every part of the employee lifecycle in a single system.
Take onboarding, for example. With Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third‑party apps like Slack and Microsoft 365—all within 90 seconds.
Based in San Francisco, CA, Rippling has raised $1.4B+ from the world’s top investors—including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock—and was named one of America’s best startup employers by Forbes.
We prioritize candidate safety. Please be aware that all official communication will only be sent from @rippling.com addresses.
About the Team
The Growth Engineering team builds world‑class products, data infrastructure, and AI systems powering Rippling’s market intelligence and GTM operations. The team works cross‑functionally with sales, marketing, Applied AI, and data engineering teams to design systems that amplify Rippling’s high‑performance GTM engine, from recommendation models and enrichment pipelines to AI‑driven workflows and proprietary data funnels.
We operate on a modern Growth Services infrastructure built on FastAPI, Kubernetes, Databricks, Kafka, Snowflake, PostgreSQL, and OpenAI APIs, enabling scalable experimentation and fast iteration.
About the Role
We’re seeking a Staff AI/ML Engineer to architect and lead development of production‑grade AI systems, including recommendation engines, multi‑LLM architectures, and ML pipelines. You’ll be responsible for designing systems that combine real‑time data processing, ML/LLM Ops, and intelligent orchestration across Rippling’s Growth Infrastructure.
This is a hands‑on engineering leadership role — you’ll own the technical strategy for AI/ML within Growth Engineering, mentor engineers, and solve some of the most complex challenges in production AI systems with immediate business impact.
What you will do
Architect, build, and optimize recommendation engines, personalization systems, and classification models for GTM automation
Design and implement multi‑LLM architectures combining OpenAI, Claude, and Databricks models for intelligent decisioning and reasoning
Build, train, and evaluate ML models end to end
Deploy and serve models using FastAPI, Kubernetes, and async microservices, with observability built in
Develop MLOps workflows for fine‑tuning, retraining, model versioning, and automated evaluation
Design medallion data architectures (Bronze/Silver/Gold) using Databricks Delta Live Tables and CDC patterns
Build real‑time and batch data pipelines leveraging Kafka and Databricks for high‑volume model inputs
Develop and maintain embedding systems and matrix factorization‑based recommendation frameworks for personalization and ranking
Implement AI data quality and monitoring frameworks to ensure reliability and trust in model outputs
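One of the responsibilities above is maintaining matrix factorization‑based recommendation frameworks. As a rough illustration of the underlying technique (not Rippling’s actual system, which per the posting runs on Databricks/Spark), here is a minimal SGD matrix factorization sketch on a toy user–item matrix; all names and values are hypothetical.

```python
# Illustrative matrix-factorization recommender sketch. A production system
# would use a library such as Spark MLlib ALS; this toy version just shows
# the core idea: factor a partially observed matrix R into U @ V.T.
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.01, seed=0):
    """Factor rating matrix R (NaN = unobserved) into user/item factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    mask = ~np.isnan(R)
    for _ in range(steps):
        for u, i in zip(*np.where(mask)):
            err = R[u, i] - U[u] @ V[i]
            Uu = U[u].copy()  # keep old value for the symmetric update
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * Uu - reg * V[i])
    return U, V

# Toy interaction matrix: rows = users, cols = items, NaN = unknown.
R = np.array([
    [5.0, 3.0, np.nan, 1.0],
    [4.0, np.nan, np.nan, 1.0],
    [1.0, 1.0, np.nan, 5.0],
    [np.nan, 1.0, 5.0, 4.0],
])
U, V = factorize(R)
scores = U @ V.T  # predicted affinity for every (user, item) pair
# Recommend user 0's best unobserved item (observed ones masked out).
top_item = int(np.argmax(np.where(np.isnan(R[0]), scores[0], -np.inf)))
```

Embedding‑based personalization generalizes the same idea: the learned row of `U` is a user embedding, ranked against item embeddings in `V`.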
AI Reliability, Observability & Optimization
Implement AI observability (LangSmith, Braintrust) to track performance, bias, and drift
Build fallback and routing systems for multi‑model deployments
Optimize cost and latency through batching, caching, and adaptive model selection
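The fallback, routing, and caching responsibilities above can be sketched in a few lines. This is a hypothetical stand‑in, not Rippling’s implementation: the model names and the `call_model` stub are invented, and a real router would wrap actual LLM clients with retries, timeouts, and telemetry.

```python
# Hypothetical multi-model router: try a primary model, fall back on
# failure, and cache responses to reduce cost and latency.
from functools import lru_cache

MODEL_CHAIN = ["primary-llm", "fallback-llm"]  # assumed tier order

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real LLM client; the primary 'fails' on long prompts."""
    if model == "primary-llm" and len(prompt) > 100:
        raise RuntimeError("primary overloaded")
    return f"{model}: {prompt[:20]}"

@lru_cache(maxsize=1024)
def route(prompt: str) -> str:
    """Walk the fallback chain; the cache deduplicates repeated prompts."""
    last_err = None
    for model in MODEL_CHAIN:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:
            last_err = err
    raise RuntimeError("all models failed") from last_err

short = route("score this lead")   # served by the primary model
long_ = route("x" * 150)           # primary raises, fallback answers
cached = route("score this lead")  # repeat prompt hits the lru_cache
```

Adaptive model selection extends this pattern by choosing the chain order per request (e.g., by prompt complexity or cost budget) rather than using a fixed list.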
Technical Leadership & Collaboration
Lead design reviews and guide architecture for AI/ML‑driven systems
Mentor engineers on LLM integration, MLOps, and recommendation systems
Collaborate closely with product and GTM partners to translate business goals into AI‑driven automation
What you will need
7+ years of software engineering experience, including 3+ years building production ML systems.
Expertise in recommendation engines, matrix factorization, and personalization models.
Deep experience integrating LLMs (OpenAI, Claude, etc.) into production applications.
Hands‑on experience training, evaluating, and deploying models in Databricks notebooks and Spark pipelines.
Experience with MLOps tooling for off‑the‑shelf models like XGBoost, CatBoost, or LightGBM.
Proven ability to architect scalable AI systems and lead end‑to‑end deployment.
Preferred Skills
Familiarity with LangChain, LangSmith, and vector databases.
Deep understanding of multi‑LLM coordination patterns, dynamic prompt routing, and evaluation loops.
Experience implementing AI safety, guardrails, and interpretability frameworks.
Experience deploying containerized AI services on Kubernetes.
Solid understanding of feature stores, experiment tracking, and online/offline evaluation.
Additional Information
Rippling is an equal opportunity employer. We are committed to building a diverse and inclusive workforce and do not discriminate based on race, religion, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, age, sexual orientation, veteran or military status, or any other legally protected characteristics. Rippling is committed to providing reasonable accommodations for candidates with disabilities who need assistance during the hiring process. To request a reasonable accommodation, please email accommodations@rippling.com.
Rippling highly values in‑office work to foster a collaborative environment and company culture. For office‑based employees (those who live within a defined radius of a Rippling office), Rippling considers working in the office, at least three days a week under current policy, to be an essential function of the role.
This role will receive a competitive salary, benefits, and equity. The salary for US‑based employees will be aligned with one of the ranges below based on location; see which tier applies to your location here.
A variety of factors are considered when determining someone’s compensation — including a candidate’s professional background, experience, and location. Final offer amounts may vary from the amounts listed below.
The pay range for this role is:
180,000 – 315,000 USD per year (US Tier 1)