BaseTen Labs, Inc.
Tech Lead Manager - Model Training
BaseTen Labs, Inc., San Francisco, California, United States, 94199
Overview
Baseten powers inference for the world's most dynamic AI companies, like OpenEvidence, Clay, Mirage, Gamma, Sourcegraph, Writer, Abridge, Bland, and Zed. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. With our recent $150M Series D funding, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we’re scaling our team to meet accelerating customer demand.
The Role
As a Tech Lead Manager of the Training team at Baseten, you’ll lead a team of engineers building the core systems that power large-scale training and fine-tuning of foundation models. Your team will be responsible for designing scalable, reliable, and efficient infrastructure, covering distributed training frameworks, GPU scheduling, and training pipelines, enabling both Baseten and our customers to train and adapt models at scale. You’ll balance hands-on technical contributions with people management, setting the technical direction while fostering the growth and success of your team. You’ll also play a key role in defining Baseten’s platform roadmap by identifying common infrastructure needs and turning them into reusable, self-serve capabilities.
Responsibilities
Lead, mentor, and grow a team of engineers building Baseten’s training infrastructure
Define and drive the technical strategy for large-scale training systems, with a focus on scalability, reliability, and efficiency
Architect and optimize distributed training pipelines across heterogeneous GPU/accelerator environments
Balance hands-on contributions (system design, code reviews, prototyping) with people leadership and career development
Establish best practices for training workflows, distributed systems design, and high-performance model evaluation
Collaborate with Product and Platform Engineering to translate customer and internal needs into reusable infrastructure and APIs
Develop processes that ensure consistent, reliable, and on-time delivery of high-quality systems
Stay ahead of the curve on advancements in training efficiency (FSDP, ZeRO, parameter-efficient training, hardware-aware scheduling) and bring them into production
Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent experience
5+ years of experience in ML infrastructure, distributed systems, or ML platform engineering, including 2+ years in a tech lead or manager role
Strong expertise in distributed training frameworks and orchestration (FSDP, DDP, ZeRO, Ray, Kubernetes, Slurm, or similar)
Hands-on experience building or scaling training infrastructure for LLMs or other foundation models
Deep understanding of GPU/accelerator hardware utilization, mixed precision training, and scaling efficiency
Proven ability to lead and mentor technical teams while delivering complex infrastructure projects
Excellent communication skills, with the ability to bridge technical depth and business needs
Nice to Have
Experience with multi-tenant, production-grade ML platforms
Familiarity with cluster management, GPU scheduling, or elastic resource scaling
Knowledge of advanced model adaptation techniques (LoRA, QLoRA, RLHF, DPO)
Contributions to open-source distributed training or ML infrastructure projects
Experience building developer-friendly APIs or SDKs for ML workflows
Cloud-native infrastructure experience (AWS, GCP, Azure, containerization, orchestration)
Benefits
Competitive compensation package.
This is a unique opportunity to be part of a rapidly growing startup in one of the most exciting engineering fields of our era.
An inclusive and supportive work culture that fosters learning and growth.
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.