Arkansas Staffing
Senior Deep Learning Software Engineer, Inference
Arkansas Staffing, Santa Clara, California, US, 95053
Senior Software Engineer Specializing in Deep Learning Inference
NVIDIA seeks a senior software engineer specializing in deep learning inference. As a key contributor, you will help design, build, and optimize the GPU-accelerated software that powers today's most sophisticated AI applications. Our team develops and maintains high-performance deep learning frameworks, including SGLang and vLLM, which are at the forefront of efficient large-scale model serving and inference. You will play a central role in improving these platforms, facilitating the smooth deployment and serving of groundbreaking language models. You'll work closely with the deep learning community to implement the latest algorithms for public release in frameworks like SGLang and vLLM, as well as other DL frameworks. Your work will focus on identifying and driving performance improvements for state-of-the-art LLM and generative AI models across NVIDIA accelerators, from datacenter GPUs to edge SoCs. You'll bring to bear open-source tools and plugins, including CUTLASS, OAI Triton, NCCL, and CUDA kernels, to implement and optimize model serving pipelines.

What you'll be doing:
- Performance optimization, analysis, and tuning of DL models in domains such as LLM, multimodal, and generative AI.
- Scale performance of DL models across different architectures and types of NVIDIA accelerators.
- Contribute features and code to NVIDIA's inference libraries and LLM software solutions, including vLLM, SGLang, and FlashInfer.
- Collaborate with teams across DL frameworks, NVIDIA libraries, and inference optimization solutions.

What we need to see:
- Master's or PhD, or equivalent experience, in a relevant field (Computer Engineering, Computer Science, EECS, AI).
- 5+ years of relevant software development experience.
- Excellent C/C++ programming and software design skills.
- SW Agile skills are helpful, and Python experience is a plus.
- Prior experience with training, deploying, or optimizing the inference of DL models in production is a plus.
- Prior background with performance modeling, profiling, debugging, and code optimization, or architectural knowledge of CPUs and GPUs, is a plus.

Ways to stand out from the crowd:
- Contributions to deep learning software projects such as PyTorch, vLLM, and SGLang that drive advancements in the field.
- Experience with multi-GPU communications (NCCL, NVSHMEM).
- Experience building and shipping products to enterprise customers.
- GPU programming experience (CUDA, OAI Triton, or CUTLASS).

NVIDIA is at the forefront of breakthroughs in Artificial Intelligence, High-Performance Computing, and Visualization. Our teams are composed of driven, innovative professionals dedicated to pushing the boundaries of technology.