AI Fund

Software Engineer - Model APIs

AI Fund, San Francisco, California, United States, 94199


Overview

Baseten powers inference for the world's most dynamic AI companies. This role focuses on Model APIs — the infrastructure powering hosted API endpoints for the latest open-source models. It is part of Baseten's Model Performance (MP) team, which ensures models are fast, reliable, and cost-efficient. You will join a small, high-impact team operating at the intersection of product, model performance, and infrastructure, helping to define how developers interact with AI models at scale.

Responsibilities

- Design, build, and operate the Model APIs surface with a focus on advanced inference capabilities: structured outputs (JSON mode, grammar-constrained generation), tool/function calling, and multi-modal serving
- Profile and optimize TensorRT-LLM kernels, analyze CUDA kernel performance, implement custom CUDA operators, tune memory allocation patterns for maximum throughput, and optimize communication patterns across multi-GPU setups
- Productionize performance improvements across runtimes (e.g., TensorRT, TensorRT-LLM) with a deep understanding of their internals: speculative decoding, quantization, batching, KV-cache reuse, guided generation for structured outputs, and custom scheduling and routing algorithms for high-performance serving
- Build comprehensive benchmarking frameworks that measure real-world performance across different model architectures, batch sizes, sequence lengths, and hardware configurations
- Instrument deep observability (metrics, traces, logs) and build repeatable benchmarks to measure speed, reliability, and quality
- Implement platform fundamentals: API versioning, validation, usage metering, quotas, and authentication
- Collaborate closely with other teams to deliver robust, developer-friendly model serving experiences

Requirements

- 3+ years of experience building and operating distributed systems or large-scale APIs
- Proven track record of owning low-latency, reliable backend services (rate limiting, auth, quotas, metering, migrations)
- Infra instincts with performance sensibilities: profiling, tracing, capacity planning, and SLO management
- Comfortable debugging complex systems, from runtime internals to GPU execution traces
- Strong written communication; able to produce clear design docs and collaborate across functions

Nice to Have

- Experience with LLM runtimes (vLLM, SGLang, TensorRT-LLM) or contributions to open-source inference engines (vLLM, TensorRT-LLM, SGLang, TGI)
- Knowledge of Kubernetes, service meshes, API gateways, or distributed scheduling
- Background in developer-facing infrastructure or open-source APIs
- Infra-leaning generalists with strong engineering fundamentals and curiosity; ML experience is a plus but not required

Benefits

- Competitive compensation package
- Opportunities to be part of a rapidly growing startup in a leading engineering field
- An inclusive and supportive work culture that fosters learning and growth
- Exposure to a variety of ML startups, offering learning and networking opportunities

About Baseten

Baseten powers inference for the world's most dynamic AI companies, like those leveraging hosted model APIs and open-source models. Baseten is committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.

Company details

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Venture Capital and Private Equity Principals
