Amazon Web Services (AWS)
Software Engineer - AI/ML, AWS Neuron Apps
Amazon Web Services (AWS), Seattle, Washington, US, 98127
Overview
Shape the Future of AI Accelerators at AWS Neuron. Join the elite team behind AWS Neuron—the software stack powering AWS's next-generation AI accelerators, Inferentia and Trainium. As a Senior Software Engineer on our Machine Learning Applications team, you'll be at the forefront of deploying and optimizing some of the world's most sophisticated AI models at unprecedented scale.

What You'll Impact
- Pioneer distributed inference solutions for industry-leading LLMs such as GPT, Llama, and Qwen
- Optimize breakthrough language and vision generative AI models
- Collaborate directly with silicon architects and compiler teams to push the boundaries of AI acceleration
- Drive performance benchmarking and tuning that directly impacts millions of inference calls globally

Key Responsibilities
You will drive the evolution of distributed AI at AWS Neuron, developing the bridge between ML frameworks (including PyTorch and JAX) and AI hardware. This isn't just about optimization—it's about revolutionizing how AI models run at scale.

- Spearhead distributed inference architecture for PyTorch and JAX using XLA
- Engineer breakthrough performance optimizations for AWS Trainium and Inferentia
- Develop ML tools to enhance LLM accuracy and efficiency
- Transform complex tensor operations into highly optimized hardware implementations
- Pioneer benchmarking methodologies that shape next-gen AI accelerator design

What Makes This Role Unique
- Direct influence on AWS's AI infrastructure used by thousands of ML applications
- Full-stack optimization from high-level frameworks to hardware-specific primitives
- Creation of tools and frameworks that define industry standards for ML deployment
- Collaboration with open-source ML communities and hardware architecture teams

Your Technical Arsenal
- Deep expertise in Python and ML framework internals
- Strong understanding of distributed systems and ML optimization
- Passion for performance tuning and system architecture

Experience and Team Context
AWS Neuron focuses on distributed inference for AI workloads, with emphasis on large language model optimization, architecture-aware performance tuning, and scalable deployment.

Basic Qualifications
- 3+ years of computer science fundamentals (object-oriented design, data structures, algorithm design, problem solving, and complexity analysis)
- 3+ years of programming experience using Python or C++, and PyTorch
- Experience with AI acceleration via quantization, parallelism, model compression, batching, KV caching, and vLLM serving
- Experience with accuracy debugging and tooling, and with performance benchmarking of AI accelerators
- Fundamentals of machine learning and deep learning models, their architectures, and training and inference lifecycles, with work experience in optimizations that improve model execution

Preferred Qualifications
- Bachelor's degree in computer science or equivalent

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. If you require a workplace accommodation during the application and hiring process, including support for the interview or onboarding, please visit the accommodations page.

Our compensation reflects the cost of labor across US geographic markets. The base pay ranges and other compensation details are provided for context. This position will remain posted until filled. Applicants should apply via our internal or external career site.