Amazon Web Services (AWS)
Software Development Engineer - AI/ML, AWS Neuron, Multimodal Inference
Amazon Web Services (AWS), Seattle, Washington, US 98127
The Annapurna Labs team at AWS builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on AWS Inferentia and Trainium accelerators.
Overview

The AWS Neuron SDK provides an ML compiler, runtime, and application framework that integrates with popular ML frameworks (e.g., PyTorch and JAX) to accelerate inference and training on AWS accelerators. The Inference Enablement and Acceleration team develops scalable, high-performance infrastructure and kernels for ML workloads, working across the stack, from frameworks down to hardware, to help customers maximize performance on Neuron hardware.
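For context on how that framework integration surfaces to users, here is a minimal sketch of compiling a PyTorch model for Neuron with the torch-neuronx trace API. The toy model, shapes, and file name are illustrative assumptions, not part of this posting, and tracing requires a Neuron-enabled instance (e.g., Inf2/Trn1) with the Neuron SDK installed.

import torch
import torch.nn as nn
import torch_neuronx  # AWS Neuron PyTorch integration

# Toy model standing in for a real multimodal network (illustrative only).
model = nn.Sequential(nn.Linear(128, 256), nn.GELU(), nn.Linear(256, 10)).eval()
example = torch.rand(1, 128)

# Ahead-of-time trace: the Neuron compiler lowers the graph for
# Inferentia/Trainium and returns a TorchScript module.
neuron_model = torch_neuronx.trace(model, example)

# The compiled artifact can be saved and reloaded like any TorchScript module.
torch.jit.save(neuron_model, "model_neuron.pt")
loaded = torch.jit.load("model_neuron.pt")
print(loaded(example).shape)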
Responsibilities

- Design, develop, and optimize machine learning models and frameworks for deployment on AWS custom ML hardware accelerators.
- Participate in all stages of the ML system development lifecycle, including distributed architecture design, performance profiling, hardware-specific optimizations, testing, and production deployment.
- Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
- Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
- Analyze and optimize system-level performance across multiple generations of Neuron hardware.
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
- Implement optimizations such as fusion, sharding, tiling, and scheduling (a toy illustration of tiling and fusion follows this list).
- Conduct comprehensive testing, including unit and end-to-end model testing, with continuous deployment and releases through pipelines.
- Work directly with customers to enable and optimize their ML models on AWS accelerators.
- Collaborate across teams to develop innovative optimization techniques.
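To make "tiling" and "fusion" concrete, the following is a hardware-agnostic NumPy sketch, not Neuron or NKI code: tiles bound the working set (think on-chip SRAM), and fusing the activation into the output tile avoids materializing an intermediate tensor in memory.

import numpy as np

def tiled_matmul_relu(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    """Tiled matmul with a ReLU fused into each output tile (illustrative)."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            # Accumulator for one output tile; stays "on chip" in a real kernel.
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), dtype=a.dtype)
            for p in range(0, k, tile):
                acc += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
            # Fused activation: applied before the tile is written back.
            out[i:i+tile, j:j+tile] = np.maximum(acc, 0.0)
    return out

a = np.random.rand(128, 96).astype(np.float32)
b = np.random.rand(96, 80).astype(np.float32)
assert np.allclose(tiled_matmul_relu(a, b), np.maximum(a @ b, 0.0), atol=1e-4)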
Basic Qualifications

- 3+ years of non-internship professional software development experience
- Bachelor's degree in computer science or equivalent
- 3+ years of experience designing or architecting large systems (design patterns, reliability, and scaling)
- Fundamentals of machine learning and LLMs: their architectures, training and inference lifecycles, and experience optimizing model execution
- Software development experience in C++ or Python (at least one required)
- Strong understanding of system performance, memory management, and parallel computing principles
- Proficiency in debugging, profiling, and applying software engineering best practices in large-scale systems
Preferred Qualifications

- Familiarity with PyTorch, JIT compilation, and AOT tracing
- Familiarity with CUDA kernels or equivalent ML kernels
- Experience with performant kernel development (e.g., CUTLASS, FlashInfer)
- Familiarity with Triton-like syntax and tile-level semantics
- Experience with online/offline inference serving in production using vLLM, SGLang, TensorRT, or similar platforms (see the sketch after this list)
- Deep understanding of computer architecture, OS-level software, and parallel computing
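For readers unfamiliar with the serving platforms named above, here is a minimal sketch of offline batch inference with vLLM's Python API; the model checkpoint name is an illustrative assumption.

from vllm import LLM, SamplingParams

# Any Hugging Face-compatible checkpoint works here; this name is illustrative.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=64)

# Offline (batch) generation; online serving would use vLLM's OpenAI-compatible server.
outputs = llm.generate(["What is AWS Neuron?"], params)
for out in outputs:
    print(out.outputs[0].text)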
Amazon is an equal opportunity employer and does not discriminate based on protected status. If you require workplace accommodations during the application or interview process, please visit Amazon’s accommodations page for more information.