Baseten
Engineering Manager, AI Inference Infrastructure
Baseten, San Francisco, California, United States, 94199
Base pay range: $150,000 - $225,000 per year. This range is provided by Baseten; your actual pay will be based on your skills and experience. Talk with your recruiter to learn more.
About Baseten
Baseten powers inference for the world's most dynamic AI companies, like OpenEvidence, Clay, Mirage, Gamma, Sourcegraph, Writer, Abridge, Bland, and Zed. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting‑edge models into production. With our recent $150M Series D funding, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction, we’re scaling our team to meet accelerating customer demand.
The Role
As an Engineering Manager (Player‑Coach) focused on AI Inference Infrastructure, you’ll lead a team responsible for the performance, reliability, and success of large‑scale ML workloads in production. Combining hands‑on technical ownership with managerial leadership, you will guide your team through complex incidents, improve observability and operational practices, and shape how we deliver world‑class AI infrastructure support to our customers. While you actively coach and grow your team, you’ll also stay close to the technology: diving into runtime debugging, optimizing GPU utilization, and helping evolve the Baseten platform based on real‑world patterns and customer feedback.
Responsibilities
Lead, mentor, and scale a team of Support Engineers specializing in AI and ML production environments, fostering technical depth, accountability, and a customer‑first mindset.
Serve as a player‑coach, directly contributing to complex troubleshooting, inference optimization, and incident resolution for high‑value enterprise customers.
Diagnose and resolve runtime issues impacting model performance, such as latency spikes, memory pressure, GPU scheduling, and concurrency management.
Debug Kubernetes infrastructure (pods, controllers, networking) and observability stacks using tools like Grafana, Loki, and Prometheus.
Own critical incidents end‑to‑end — coordinating across Engineering, Product, and Sales to ensure timely resolution, transparent communication, and SLA compliance.
Drive continuous improvement by enhancing diagnostic runbooks, refining alerting strategies, and developing internal automation for faster root‑cause analysis.
Collaborate with product and platform teams to surface insights from production issues — shaping roadmap priorities around reliability, inference efficiency, and operational scalability.
Lead initiatives that enhance observability, monitoring, and alerting for AI workloads across distributed compute environments.
Balance tactical execution with strategic vision, ensuring your team not only resolves today’s issues but also builds systems that prevent tomorrow’s.
Requirements
Proven experience leading or mentoring technical teams in Support Engineering, Infrastructure, or Site Reliability within production AI/ML or distributed systems environments.
Deep Kubernetes troubleshooting expertise, including advanced resource debugging, runtime performance analysis, and observability‑driven diagnostics.
Hands‑on experience managing distributed systems or AI products at scale — optimizing GPU/CPU utilization, batch sizing, concurrency, and memory efficiency.
Expertise with observability and monitoring tools (Grafana, Prometheus, Loki) and alerting best practices.
Skilled in incident management and customer escalation handling, with a proven ability to drive clarity and confidence in high‑stakes situations.
Demonstrated project management and organizational skills, capable of orchestrating multi‑stakeholder efforts from incident triage through resolution and RCA.
Bonus / Nice‑to‑Have
Experience implementing or managing incident‑response and ticketing systems (e.g., Zendesk, Pylon).
Benefits
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employee and dependents.
Generous PTO policy, including a company‑wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!).
Paid parental leave.
Company‑facilitated 401(k).
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative, forward‑thinking team, we would love to hear from you. Apply now to help shape the future of AI!
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.