Prime Intellect
Member of Technical Staff - Inference
Prime Intellect, San Francisco, California, United States, 94199
Join Prime Intellect as a Member of Technical Staff – Inference. Prime Intellect is building the open superintelligence stack – from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them.
We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post‑training stack: environments, secure sandboxes, verifiable evals, and an async RL trainer. We enable researchers, startups and enterprises to run end‑to‑end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.
We recently raised $15 million in funding (total of $20 million raised) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy, Tri Dao, Dylan Patel, Clem Delangue, Emad Mostaque and many others.
Role Impact
This role spans cloud LLM serving, LLM inference optimization, and RL systems. You will work on advancing our ability to evaluate and serve models trained with our Environment Hub at scale. The two key areas are:
Building the infrastructure to serve LLMs efficiently at scale.
Optimizing and integrating inference systems into our RL training stack.
Core Technical Responsibilities
LLM Serving
Build and operate a multi‑tenant LLM serving platform across our cloud GPU fleets.
Design GPU‑aware scheduling algorithms for heterogeneous accelerators.
Implement multi‑region/zone failover and traffic shifting for resilience and cost control.
Tune autoscaling, routing, and load balancing to meet throughput/latency SLOs.
Optimize model distribution and cold‑start times across clusters.
Inference Optimization & Performance
Integrate and contribute to LLM inference frameworks such as vLLM, SGLang, TensorRT‑LLM.
Optimize configurations for tensor/pipeline/expert parallelism, prefix caching, memory management and other axes for maximum performance.
Profile kernels, memory bandwidth and transport; apply techniques such as quantization and speculative decoding.
Develop reproducible performance suites (latency, throughput, context length, batch size, precision).
Embed and optimize distributed inference within our RL stack.
Platform & Tooling
Establish CI/CD with artifact promotion, performance gates, and reproducible builds.
Build metrics, logs, and tracing; run structured incident response and SLO management.
Document architectures, playbooks, and API contracts; mentor and collaborate cross‑functionally.
Technical Requirements
Required Experience
3+ years building and running large‑scale ML/LLM services with clear latency/availability SLOs.
Hands‑on with at least one of vLLM, SGLang, TensorRT‑LLM.
Familiarity with distributed and disaggregated serving infrastructure such as NVIDIA Dynamo.
Deep understanding of prefill vs. decode phases, KV‑cache behavior, batching, sampling, speculative decoding, and parallelism strategies.
Comfortable debugging CUDA/NCCL, drivers/kernels, containers, service mesh/networking, and storage, owning incidents end‑to‑end.
Infrastructure Skills
Python: systems tooling and backend services.
PyTorch: LLM inference engine development, integration, and deployment readiness.
AWS/GCP: service experience, cloud deployment patterns.
Kubernetes: running containerized infrastructure at scale.
GPU & Networking: architecture, CUDA runtime, NCCL, InfiniBand; GPU‑aware bin‑packing and scheduling across heterogeneous fleets.
Nice to Have
CUDA/Triton kernel development; Nsight Systems/Compute profiling.
Systems performance languages: Rust, C++.
Data & Observability: Kafka/PubSub, Redis, gRPC/Protobuf; Prometheus/Grafana, OpenTelemetry; reliability patterns.
Infrastructure‑as‑code: Terraform/Ansible, reproducible environments.
Open source contributions to serving, inference, or RL infrastructure projects.
What we offer
Competitive compensation with significant equity incentives.
Flexible work arrangement (remote or San Francisco office).
Full visa sponsorship and relocation support.
Professional development budget.
Regular team off‑sites and conference attendance.
Opportunity to shape decentralized AI and RL at Prime Intellect.
Growth Opportunity
You’ll join a team of experienced engineers and researchers working on cutting‑edge problems in AI infrastructure. We believe in open development and encourage team members to contribute to the broader AI community through research and open‑source contributions.
We value potential over perfection. If you’re passionate about democratizing AI development, we want to talk to you.
Ready to help shape the future of AI? Apply now and join us in our mission to make powerful AI models accessible to everyone.
Job Details
Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Engineering and Information Technology
Industries: Software Development