Acceler8 Talent
Solutions & Customer Success Lead — AI Inference / Systems
What if your next role put you at the center of deploying, optimizing, and scaling AI inference systems across real production hardware? Our client—a high-revenue AI infrastructure startup redefining how agentic AI runs across CPUs, GPUs, and accelerators—is hiring a Solutions & Customer Success Lead (San Francisco, Onsite). They’re building the inference layer for next‑generation AI systems, spanning compilers, orchestration, kernel generation, and hardware‑aware scheduling—already deployed at Fortune 500 companies. The role:
- Own the full customer lifecycle across inference‑heavy enterprise deployments
- Lead on‑prem POCs: setup, tuning, profiling, and performance troubleshooting
- Work hands‑on with customers implementing models, pipelines, and orchestration logic
- Support large‑scale inference workloads across hybrid cloud and edge environments
- Diagnose latency, throughput, and utilization issues across hardware stacks
- Act as the primary technical interface between customers, sales, and core infrastructure teams

Why join:
- Real revenue, real customers, production inference at scale
- Deep exposure to compilers, kernels, orchestration, and AI systems
- Elite, low‑ego, deeply technical founding team

Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Engineering and Sales
Industries: Software Development and Computer Hardware Manufacturing