NVIDIA
Overview
NVIDIA is at the forefront of innovations in Artificial Intelligence, High-Performance Computing, and Visualization. We are looking for an ML Platform Engineer to help accelerate the next era of machine learning innovation. You will architect, scale, and optimize high-performance ML infrastructure used across NVIDIA's AI research and product teams, enabling scientists and engineers to train, fine-tune, and deploy advanced ML models on powerful GPU systems.

What You'll Be Doing
- Design, build, and maintain scalable ML platforms and infrastructure for training and inference on large-scale, distributed GPU clusters.
- Develop internal tools and automation for ML workflow orchestration, resource scheduling, data access, and reproducibility.
- Collaborate with ML researchers and applied scientists to optimize performance and streamline end-to-end experimentation.
- Evolve and operate multi-cloud and hybrid (on-prem + cloud) environments with a focus on high availability and performance for AI workloads.
- Define and monitor ML-specific infrastructure metrics, such as model efficiency, resource utilization, job success rates, and pipeline latency.
- Build tooling to support experiment tracking, reproducibility, model versioning, and artifact management.
- Participate in on-call support for platform services and infrastructure running critical ML jobs.
- Drive the adoption of modern GPU technologies and ensure smooth integration of next-generation hardware (e.g., GB200, NVLink) into ML pipelines.

What We Need To See
- BS/MS in Computer Science, Engineering, or equivalent experience.
- 7+ years in software/platform engineering, including 3+ years in ML infrastructure or distributed compute systems.
- Solid understanding of ML training/inference workflows and lifecycle, from data preprocessing to deployment.
- Proficiency in crafting and operating containerized workloads with Kubernetes, Docker, and workload schedulers.
- Experience with ML orchestration tools such as Kubeflow, Flyte, Airflow, or Ray.
- Strong coding skills in languages such as Python, Go, or Rust.
- Experience running Slurm or custom scheduling frameworks in production ML environments.
- Familiarity with GPU computing, Linux systems internals, and performance tuning at scale.

Ways To Stand Out From The Crowd
- Experience building or operating ML platforms supporting frameworks like PyTorch, TensorFlow, or JAX at scale.
- Deep understanding of distributed training techniques (e.g., data/model parallelism, Horovod, NCCL).
- Expertise with infrastructure-as-code tools (Terraform, Ansible) and modern CI/CD methodologies.
- Passion for building developer-centric platforms with great UX and strong operational reliability.

Compensation and Benefits
Your base salary will be determined based on location, experience, and pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits.

Application and Diversity Statement
Applications for this job will be accepted at least until September 21, 2025. NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. We value diversity in our current and future employees and do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

JR2001434