NVIDIA
Senior Performance and Development Engineer
Joining NVIDIA's AI Efficiency Team means contributing to the infrastructure that powers our leading‑edge AI research. This team focuses on optimizing efficiency and resiliency of ML workloads, as well as developing scalable AI infrastructure tools and services. Our objective is to deliver a stable, scalable environment for NVIDIA's AI researchers, providing them with the necessary resources and scale to foster innovation.
We're redefining the way Deep Learning applications run on tens of thousands of GPUs. Join our team of experts and help us build a supercharged AI platform that improves efficiency, resilience, and Model FLOPs Utilization (MFU). In this position, you will collaborate with a diverse team that spans many areas of the Deep Learning HW/SW stack to build a highly scalable, fault‑tolerant, and optimized AI platform.
What You Will Be Doing
Build AI models, tools and frameworks that provide real‑time application performance metrics that can be correlated with system metrics
Develop automation frameworks that enable applications to proactively predict and recover from system and infrastructure failures, ensuring fault tolerance
Collaborate with software teams to pinpoint performance bottlenecks. Design, prototype, and integrate solutions that deliver demonstrable performance gains in production environments
Adapt and enhance communication libraries to seamlessly support innovative network topologies and system architectures
Design or adapt optimized storage solutions to boost Deep Learning efficiency, resilience, and developer productivity
What We Need To See
BS/MS/PhD (or equivalent experience) in Computer Science, Electrical Engineering or a related field
12+ years of experience analyzing and improving the performance of training applications built with PyTorch or a similar framework
Experience building distributed software applications using collective communication libraries such as MPI, NCCL or UCC
Experience constructing storage solutions for Deep Learning applications
Experience building automated, fault‑tolerant distributed applications
Experience building tools for bottleneck analysis and automated fault tolerance in distributed environments
Strong background in parallel programming and distributed systems
Experience analyzing and optimizing large‑scale distributed applications
Excellent verbal and written communication skills
Ways To Stand Out From The Crowd
Deep understanding of HPC and distributed system architecture
Hands‑on experience in more than one of the above areas, especially with large state‑of‑the‑art (SOTA) AI models and with performance analysis and profiling of Deep Learning workloads
Comfortable navigating and working with the PyTorch codebase
Proven understanding of CUDA and GPU architecture
Salary & Benefits
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $224,000–$356,500 USD for Level 5 and $272,000–$425,500 USD for Level 6.
You will also be eligible for equity and benefits.
EEO Statement
NVIDIA is committed to fostering a diverse work environment and is an equal‑opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.