NVIDIA
Senior ML Platform Engineer, AI Infrastructure
NVIDIA, Redmond, Washington, United States, 98052
Overview
NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work: to amplify human imagination and intelligence. Make the choice to join us today!

As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. We seek a technical leader to identify architectural changes and/or completely new approaches for our GPU compute clusters. You will help address strategic challenges, including compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous compute environment; evolving our private/public cloud strategy; capacity modeling; and growth planning across our global computing environment.
What You'll Be Doing
Provide leadership and strategic guidance on the management of large-scale HPC systems, including the deployment of compute, networking, and storage.
Develop and improve our ecosystem around GPU-accelerated computing, including building scalable automation solutions.
Build and maintain heterogeneous AI and ML clusters on-premises and in the cloud.
Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet users' evolving needs.
Support our researchers in running their workloads, including performance analysis and optimization.
Conduct root cause analysis and recommend corrective action; proactively identify and resolve issues before they occur.
What We Need To See
Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
Minimum 8 years of experience designing and operating large-scale compute infrastructure
Experience with advanced AI/HPC job schedulers such as Slurm, Kubernetes (K8s), RTDA, or LSF
Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions
Solid understanding of cluster configuration management tools such as Ansible, Puppet, Salt
In-depth understanding of container technologies like Docker, Singularity, Podman, Shifter, Charliecloud
Proficiency in Python programming and Bash scripting
Applied experience with AI/HPC workflows that use MPI
Experience analyzing and tuning performance for a variety of AI/HPC workloads
Passion for continual learning and staying ahead of emerging technologies in HPC and AI/ML infrastructure
Ways To Stand Out From The Crowd
Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking
Experience with Machine Learning and Deep Learning concepts, algorithms and models
Familiarity with InfiniBand, IPoIB, and RDMA
Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads
Familiarity with deep learning frameworks like PyTorch and TensorFlow
NVIDIA offers highly competitive salaries and a comprehensive benefits package. Our world-class engineering teams are growing fast to meet unprecedented demand. If you’re a creative and autonomous engineer with a real passion for technology, we want to hear from you.
Your base salary will be determined based on location, experience, and the pay of employees in similar positions. The base salary ranges are 184,000–287,500 USD for Level 4 and 224,000–356,500 USD for Level 5. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until August 24, 2025.
NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
JR2000712
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Computer Hardware Manufacturing, Software Development, and Computers and Electronics Manufacturing