NVIDIA
Senior Software Engineer - NIM Factory Container and Cloud Infrastructure
NVIDIA, Santa Clara, California, US 95053
NVIDIA is the platform upon which every new AI-powered application is built. We are seeking a Senior Software Engineer focused on container and cloud infrastructure. You will help design and implement our core container strategy for NVIDIA Inference Microservices (NIMs) and our hosted services, build enterprise-grade software and tooling for container build, packaging, and deployment, and help improve reliability, performance, and scale across thousands of GPUs.

What You'll Be Doing
- Design, build, and harden containers for NIM runtimes and inference backends; enable reproducible, multi-arch, CUDA-optimized builds.
- Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses; enforce quality with typing, linting, and unit/integration tests.
- Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi-cluster rollouts.
- Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.
- Evolve the base image strategy, dependency management, and artifact/registry topology.
- Collaborate across research, backend, SRE, and product teams to ensure day-0 availability of new models.
- Mentor teammates and set high engineering standards for container quality, security, and operability.

What We Need To See
- 10+ years building production software with a strong focus on containers and Kubernetes.
- Strong Python skills building production-grade tooling and services.
- Experience with Python SDKs and clients for Kubernetes and cloud services.
- Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi-stage builds, and registry workflows.
- Deep experience operating workloads on Kubernetes.
- Strong understanding of LLM inference features, including structured output, KV cache, and LoRA adapters.
- Hands-on experience building and running GPU workloads on Kubernetes, including the NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation.
- Excellent collaboration and communication skills; ability to influence cross-functional design.
- A degree in Computer Science, Computer Engineering, or a related field (BS or MS), or equivalent experience.

Ways To Stand Out From The Crowd
- Expertise with Helm chart design, Operators, and platform APIs serving many teams.
- Experience with the OpenAI API and Hugging Face API, as well as an understanding of different inference backends (vLLM, SGLang, TRT-LLM).
- Background in benchmarking and optimizing inference container performance and startup latency at scale.
- Prior experience designing multi-tenant, multi-cluster, or edge/air-gapped container delivery.
- Contributions to open-source container, Kubernetes, or GPU ecosystems.

NVIDIA offers competitive salaries and a generous benefits package. We are an equal opportunity employer and welcome diversity in our team. Applications for this job will be accepted until September 21, 2025.