Amazon Web Services (AWS)
Sr. ML Kernel Performance Engineer, AWS Neuron, Annapurna Labs
Amazon Web Services (AWS), Cupertino, California, United States, 95014
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team is at the forefront of maximizing performance for these accelerators. Working at the hardware-software boundary, our engineers craft high-performance kernels for ML functions, ensuring every FLOP counts in delivering optimal performance for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.

The Neuron SDK is a comprehensive toolkit that includes an ML compiler, runtime, and application framework, integrating seamlessly with popular ML frameworks such as PyTorch to enable high-performance ML inference and training. As part of the broader Neuron Compiler organization, our team works across multiple technology layers, from frameworks and compilers to runtime and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology.

Key job responsibilities:
- Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models (see the illustrative sketch after the qualifications lists below)
- Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
- Implement compiler optimizations such as fusion, sharding, tiling, and scheduling
- Work directly with customers to enable and optimize their ML models on AWS accelerators
- Collaborate across teams to develop innovative kernel optimization techniques

Basic Qualifications:
- 5+ years of non-internship professional software development experience
- 5+ years of programming experience with at least one software programming language
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Experience as a mentor, tech lead, or leader of an engineering team

Preferred Qualifications:
- Bachelor's degree in computer science or equivalent
- 6+ years of full software development experience
- Expertise in accelerator architectures for ML or HPC, such as GPUs, CPUs, FPGAs, or custom architectures
- Experience with GPU kernel optimization and GPGPU computing, such as CUDA, NKI, Triton, OpenCL, SYCL, or ROCm
- Demonstrated experience with NVIDIA PTX and/or AMD GPU ISA
- Experience developing high-performance libraries for HPC applications
- Proficiency in low-level performance optimization for GPUs
- Experience with LLVM/MLIR backend development for GPUs
- Knowledge of ML frameworks (PyTorch, TensorFlow) and their GPU backends
- Experience with parallel programming and optimization techniques
- Understanding of GPU memory hierarchies and optimization strategies
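For illustration only (not part of the original posting): the kernel work described in the responsibilities above is done in interfaces like NKI, the Neuron Kernel Interface named in the preferred qualifications. Below is a minimal element-wise addition kernel sketched in the style of the public NKI documentation; the decorator, module, and buffer names (nki.jit, nki.language, nl.shared_hbm) follow those docs, but treat the exact API as an assumption to verify against the current Neuron SDK release.

    from neuronxcc import nki
    import neuronxcc.nki.language as nl

    @nki.jit
    def nki_tensor_add_kernel(a_input, b_input):
        """Element-wise addition of two tensors on a NeuronCore.

        Assumes both inputs fit in a single tile, i.e. the first
        (partition) dimension is at most 128.
        """
        # Allocate the output tensor in device HBM.
        c_output = nl.ndarray(a_input.shape, dtype=a_input.dtype,
                              buffer=nl.shared_hbm)

        # Load the inputs from HBM into on-chip SBUF memory.
        a_tile = nl.load(a_input)
        b_tile = nl.load(b_input)

        # Element-wise add on the compute engines.
        c_tile = a_tile + b_tile

        # Store the result back to HBM and return it.
        nl.store(c_output, value=c_tile)
        return c_output

On a Trainium or Inferentia2 instance the kernel can be invoked directly on NumPy arrays. Production kernels of the kind this role covers layer explicit tiling loops, index ranges, and engine scheduling on top of this skeleton to handle tensors larger than one tile.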
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.