NVIDIA

Senior GenAI Algorithms Engineer — Model Optimizations for Inference

NVIDIA, Santa Clara, California, US 95053


Overview

NVIDIA is at the forefront of the generative AI revolution. The Algorithmic Model Optimization Team focuses on optimizing generative AI models (LLMs, diffusion models, VLMs, multimodal) for maximal inference efficiency using techniques such as quantization, speculative decoding, sparsity, distillation, pruning, neural architecture search, and streamlined deployment strategies with open-source inference frameworks. In this role, you will design, implement, and productionize model optimization algorithms for inference and deployment on NVIDIA’s latest hardware platforms, with a focus on ease of use, compute and memory efficiency, and strong accuracy–performance tradeoffs through software–hardware co-design.

Responsibilities

- Design and build modular, scalable model optimization software platforms that deliver exceptional user experiences while supporting diverse AI models and optimization techniques to drive widespread adoption.
- Explore, develop, and integrate innovative deep learning optimization algorithms (e.g., quantization, speculative decoding, sparsity) into NVIDIA's AI software stack (TensorRT Model Optimizer, NeMo/Megatron, TensorRT-LLM).
- Deploy optimized models into leading OSS inference frameworks and contribute specialized APIs, model-level optimizations, and new features tailored to the latest NVIDIA hardware capabilities.
- Partner with NVIDIA teams to deliver model optimization solutions for customer use cases, ensuring optimal end-to-end workflows and balanced accuracy–performance trade-offs.
- Conduct deep GPU kernel-level profiling to identify and capitalize on hardware and software optimization opportunities (e.g., efficient attention kernels, KV cache optimization, parallelism strategies).
- Drive continuous innovation in deep learning inference performance to strengthen NVIDIA platform integration and expand market adoption across the AI inference ecosystem.

Qualifications

- Master’s, PhD, or equivalent experience in Computer Science, Artificial Intelligence, Applied Mathematics, or a related field.
- 5+ years of relevant work or research experience in deep learning.
- Strong software design skills, including debugging, performance analysis, and test development.
- Proficiency in Python, PyTorch, and modern ML frameworks/tools.
- Proven foundation in algorithms and programming fundamentals.
- Strong written and verbal communication skills, with the ability to work both independently and collaboratively in a fast-paced environment.

Ways To Stand Out

- Contributions to PyTorch, JAX, vLLM, SGLang, or other machine learning training and inference frameworks.
- Hands-on experience training or fine-tuning generative AI models on large-scale GPU clusters.
- Proficiency in GPU architectures and compilation stacks, with the ability to analyze and debug end-to-end performance.
- Familiarity with NVIDIA’s deep learning SDKs (e.g., TensorRT).
- Experience developing high-performance GPU kernels for ML workloads using CUDA, CUTLASS, or Triton.

Additional Information

Increasingly known as “the AI computing company,” NVIDIA offers competitive salaries and a comprehensive benefits package. Your base salary will be determined based on location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3 and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until September 26, 2025.

NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
