Engineering Manager, AI Inference Infrastructure
Baseten, San Francisco, California, United States

About Baseten
Baseten powers inference for the world’s most dynamic AI companies, delivering scalable, reliable, and efficient AI inference infrastructure.

The Role
As an Engineering Manager (Player & Coach) focused on AI Inference Infrastructure, you’ll lead a team responsible for the performance, reliability, and success of large‑scale ML workloads in production. You will combine hands‑on technical ownership with managerial leadership, guiding your team through complex incidents, improving observability and operational practices, and shaping how we deliver world‑class AI infrastructure support to our customers. While you actively coach and grow your team, you will also stay close to the technology, diving into runtime debugging, optimizing GPU utilization, and evolving the Baseten platform based on real‑world patterns and customer feedback.

Responsibilities
- Lead, mentor, and scale a team of Support Engineers specializing in AI and ML production environments, fostering technical depth, accountability, and a customer‑first mindset.
- Serve as a player‑coach, directly contributing to complex troubleshooting, inference optimization, and incident resolution for high‑value enterprise customers.
- Diagnose and resolve runtime issues impacting model performance, such as latency spikes, memory pressure, GPU scheduling, and concurrency management.
- Debug Kubernetes infrastructure (pods, controllers, networking) and observability stacks using tools like Grafana, Loki, and Prometheus.
- Own critical incidents end‑to‑end, coordinating across Engineering, Product, and Sales to ensure timely resolution, transparent communication, and SLA compliance.
- Drive continuous improvement by enhancing diagnostic runbooks, refining alerting strategies, and developing internal automation for faster root‑cause analysis.
- Collaborate with product and platform teams to surface insights from production issues, shaping roadmap priorities around reliability, inference efficiency, and operational scalability.
- Lead initiatives that enhance observability, monitoring, and alerting for AI workloads across distributed compute environments.
- Balance tactical execution with strategic vision, ensuring the team not only resolves today’s issues but also builds systems that prevent tomorrow’s.

Requirements
- Proven experience leading or mentoring technical teams in Support Engineering, Infrastructure, or Site Reliability within production AI/ML or distributed systems environments.
- Deep Kubernetes troubleshooting expertise, including advanced resource debugging, runtime performance analysis, and observability‑driven diagnostics.
- Hands‑on experience managing distributed systems or AI products at scale, optimizing GPU/CPU utilization, batch sizing, concurrency, and memory efficiency.
- Expertise with observability and monitoring tools (Grafana, Prometheus, Loki) and alerting best practices.
- Skill in incident management and customer escalation handling, with a proven ability to drive clarity and confidence in high‑stakes situations.
- Demonstrated project management and organizational skills, capable of orchestrating multi‑stakeholder efforts from incident triage through resolution and root‑cause analysis (RCA).

Bonus / Nice‑to‑Have
- Experience implementing or managing incident‑response and ticketing systems (e.g., Zendesk, Pylon).

Benefits
- Competitive compensation, including meaningful equity.
- 100% coverage of medical, dental, and vision insurance for employees and dependents.
- Generous PTO policy, including a company‑wide Winter Break (off from Christmas Eve through New Year’s Day).
- Paid parental leave.
- Company‑facilitated 401(k).
- Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.

At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.