NVIDIA
Senior Software Engineer - NIM Factory Container and Cloud Infrastructure
NVIDIA, Granite Heights, Wisconsin, United States
NVIDIA is the platform upon which every new AI‑powered application is built. We are seeking a Senior Software Engineer focused on container and cloud infrastructure. You will help design and implement our core container strategy for NVIDIA Inference Microservices (NIMs) and our hosted services. You will build enterprise‑grade software and tooling for container build, packaging, and deployment. You will help improve reliability, performance, and scale across thousands of GPUs. There is much more to build ahead, including support for disaggregated LLM inference and other emerging deployment patterns.
What You’ll Be Doing
Design, build, and harden containers for NIM runtimes and inference backends; enable reproducible, multi‑arch, CUDA‑optimized builds.
Develop Python tooling and services for build orchestration, CI/CD integrations, Helm/Operator automation, and test harnesses; enforce quality with typing, linting, and unit/integration tests (a minimal build orchestration sketch follows this list).
Help design and evolve Kubernetes deployment patterns for NIMs, including GPU scheduling, autoscaling, and multi‑cluster rollouts.
Optimize container performance: layer layout, startup time, build caching, runtime memory/IO, network, and GPU utilization; instrument with metrics and tracing.
Evolve the base image strategy, dependency management, and artifact/registry topology.
Collaborate across research, backend, SRE, and product teams to ensure day‑0 availability of new models.
Mentor teammates; set high engineering standards for container quality, security, and operability.
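For illustration, the Python build orchestration tooling described above could look roughly like the sketch below, which shells out to docker buildx to produce a reproducible, multi-arch image with registry-backed layer caching. The registry, image name, and tag are hypothetical placeholders, and a production pipeline would add logging, retries, tests, and provenance signing.

```python
"""Illustrative sketch of a multi-arch container build helper.

Assumes Docker with the buildx plugin is installed and the current user is
logged in to the target registry. The registry and image names below are
hypothetical placeholders, not real NVIDIA infrastructure.
"""
import subprocess

REGISTRY = "registry.example.com/nim"      # hypothetical registry
IMAGE = f"{REGISTRY}/runtime-example"      # hypothetical image name
PLATFORMS = "linux/amd64,linux/arm64"      # multi-arch build targets


def build_and_push(tag: str, context: str = ".") -> None:
    """Build a multi-arch image with registry-backed layer caching and push it."""
    cmd = [
        "docker", "buildx", "build",
        "--platform", PLATFORMS,
        "--tag", f"{IMAGE}:{tag}",
        # Reuse layers cached from earlier builds and export the new cache.
        "--cache-from", f"type=registry,ref={IMAGE}:buildcache",
        "--cache-to", f"type=registry,ref={IMAGE}:buildcache,mode=max",
        "--push",
        context,
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    build_and_push("dev")
```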
What We Need To See
10+ years building production software with a strong focus on containers and Kubernetes.
Strong Python skills building production‑grade tooling/services.
Experience with Python SDKs and clients for Kubernetes and cloud services.
Expert knowledge of Docker/BuildKit, containerd/OCI, image layering, multi‑stage builds, and registry workflows.
Deep experience operating workloads on Kubernetes.
Strong understanding of LLM inference features, including structured output, KV‑cache, and LoRA adapters.
Hands‑on experience building and running GPU workloads in k8s, including NVIDIA device plugin, MIG, CUDA drivers/runtime, and resource isolation (a minimal GPU deployment sketch follows this list).
Excellent collaboration and communication skills; ability to influence cross‑functional design.
A degree in Computer Science, Computer Engineering, or a related field (BS or MS) or equivalent experience.
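As a rough illustration of the Kubernetes and GPU experience listed above, the sketch below uses the official kubernetes Python client to create a Deployment that requests one GPU through the nvidia.com/gpu extended resource exposed by the NVIDIA device plugin. The image, labels, and namespace are hypothetical, and it assumes a cluster where the device plugin is already installed.

```python
"""Illustrative sketch: deploy a GPU-backed inference container with the
official kubernetes Python client. Assumes a cluster with the NVIDIA device
plugin installed; the image, labels, and namespace are hypothetical.
"""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

container = client.V1Container(
    name="inference",
    image="registry.example.com/nim/runtime-example:dev",  # hypothetical image
    ports=[client.V1ContainerPort(container_port=8000)],
    # Request one GPU through the extended resource exposed by the device plugin.
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "inference-example"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="inference-example"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "inference-example"}),
        template=template,
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```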
Ways to Stand Out
Expertise designing Helm charts, Operators, and platform APIs that serve many teams.
Experience with the OpenAI and Hugging Face APIs, and an understanding of different inference backends (vLLM, SGLang, TRT‑LLM); a minimal API sketch follows this list.
Background in benchmarking and optimizing inference container performance and startup latency at scale.
Prior experience designing multi‑tenant, multi‑cluster, or edge/air‑gapped container delivery.
Contributions to open‑source container, k8s, or GPU ecosystems.
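As a small illustration of the OpenAI-style API familiarity mentioned above, the sketch below posts a chat completions request to an OpenAI-compatible endpoint with the requests library. The base URL and model name are hypothetical placeholders; only the request/response format is assumed.

```python
"""Illustrative sketch of calling an OpenAI-compatible chat completions
endpoint. The base URL and model name are hypothetical placeholders for
whichever inference service is being exercised; only the wire format is assumed.
"""
import requests

BASE_URL = "http://localhost:8000/v1"  # hypothetical local endpoint
MODEL = "example-model"                # hypothetical model name

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Summarize KV-cache reuse in one sentence."}],
        "max_tokens": 64,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```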
With competitive salaries and a generous benefits package, NVIDIA is widely considered to be one of the technology industry's most desirable employers. We have some of the most forward‑thinking and versatile people in the world working with us, and our engineering teams are growing fast in some of the most impactful fields of our generation: Deep Learning, Artificial Intelligence, and Autonomous Vehicles. If you’re a creative engineer who enjoys autonomy and shares our passion for technology, we want to hear from you.
Base salary: $184,000 – $287,500 for Level 4 and $224,000 – $356,500 for Level 5.
Equity and benefits are also part of the compensation package.
Applications will be accepted at least until November 1, 2025.
NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Job reference: JR2004245
Seniority Level: Mid‑Senior level
Employment Type: Full‑time
Industries: Computer Hardware Manufacturing, Software Development, and Computers and Electronics Manufacturing