Amazon Web Services (AWS)
ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron
Amazon Web Services (AWS), Seattle, Washington, US, 98127
Description
By applying to this position, your application will be considered for all locations we hire for in the United States.

Role
AWS Neuron is the complete software stack for the AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2) cloud-scale ML accelerators. This role is for a Machine Learning Engineer on one of our AWS Neuron teams.

The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium instances. Experience training large models in Python is a must. FSDP (Fully Sharded Data Parallel), DeepSpeed, NeMo, and other distributed training libraries are central to this work, and extending them for Neuron-based systems is key; see the training sketch at the end of this section.

The ML Frameworks team partners with compiler, runtime, and research experts to make AWS Trainium and Inferentia feel native inside the tools builders already love: PyTorch, JAX, and the rapidly evolving vLLM ecosystem. By weaving the Neuron SDK deep into these frameworks, optimizing operators, and crafting targeted extensions, we unlock every teraflop of Annapurna's AI chips for both training and lightning-fast inference. Beyond kernels, we shape next-generation serving by upstreaming new features and driving scalable deployments with vLLM, Triton, and TensorRT, turning breakthrough ideas into production-ready AI for millions of customers.

The ML Inference team collaborates closely with hardware designers, software optimization experts, and systems engineers to develop and optimize high-performance inference solutions for Inferentia chips. Proficiency in deploying and optimizing ML models for inference using frameworks such as TensorFlow, PyTorch, and ONNX is essential. The team focuses on techniques such as quantization, pruning, and model compression to enhance inference speed and efficiency. Adapting and extending popular inference libraries and tools for Neuron-based systems is a key aspect of the work.
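To give a concrete flavor of the distributed-training work described above, here is a minimal PyTorch FSDP sketch. It is an illustration under assumptions, not this team's actual code: it uses the stock CUDA/NCCL path, whereas on Trainium the process-group backend and device handling would come from Neuron's PyTorch integration, and the model is a toy stand-in.

# Minimal FSDP sketch; launch with: torchrun --nproc_per_node=N fsdp_demo.py
# NCCL and .cuda() are the stock GPU path; a Neuron-based system would
# swap in its own backend and device handling via the Neuron SDK.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")                  # one process per accelerator
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = torch.nn.Sequential(                     # toy stand-in for a large transformer
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # all-gathering each shard just in time for forward/backward.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()                # dummy objective
        loss.backward()                              # gradients are reduce-scattered
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Because each rank holds only a shard of the parameters, gradients, and optimizer state, this pattern is what lets models larger than any single device's memory be trained at all; extending it to Neuron hardware is the kind of work the posting describes.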
Key job responsibilities

You'll join one of our core ML teams (Frameworks, Distributed Training, or Inference) to enhance machine learning capabilities on AWS's specialized AI hardware. Your responsibilities will include improving PyTorch and JAX for distributed training on Trainium chips, optimizing ML models for efficient inference on Inferentia processors (a quantization sketch follows this section), and collaborating with compiler and runtime teams to maximize hardware performance. You'll also develop and integrate new features in ML frameworks to support AWS AI services. We seek candidates with strong programming skills, eagerness to learn complex systems, and basic ML knowledge. This role offers growth opportunities in ML infrastructure, bridging the gap between frameworks, distributed systems, and hardware acceleration.
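As a loose illustration of the quantization lever mentioned above, here is a minimal sketch using PyTorch's built-in dynamic quantization. This is a generic CPU-side example of the technique, not the Neuron/Inferentia-specific toolchain, and the model is illustrative.

import torch

# Dynamic quantization: Linear weights are converted to int8 and
# activations are quantized on the fly at inference time, trading a
# little accuracy for smaller weights and faster matmuls.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
x = torch.randn(1, 1024)
with torch.inference_mode():
    y = quantized(x)   # int8 matmuls inside, fp32 tensors at the boundaries
print(y.shape)         # torch.Size([1, 1024])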
About The Team

Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. Over the last few years we have delivered products including AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage.
Basic Qualifications

To qualify, applicants should have earned (or will earn) a Bachelor's or Master's degree between December 2022 and September 2025.
- Working knowledge of C++ and Python
- Experience with ML frameworks, particularly PyTorch, JAX, and/or vLLM
- Understanding of parallel computing concepts and CUDA programming
Preferred Qualifications

- Open source contributions to ML frameworks or tools
- Experience optimizing ML workloads for performance
- Direct experience with PyTorch internals or CUDA optimization
- Hands-on experience with LLM infrastructure tools (e.g., vLLM, TensorRT)

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $99,500/year in our lowest geographic market up to $200,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. This position will remain posted until filled. Applicants should apply via our internal or external career site.

Company - Annapurna Labs (U.S.) Inc.