Amazon Web Services (AWS)
ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs
Amazon Web Services (AWS), Cupertino, California, United States, 95014
Description
By applying to this position, your application will be considered for all locations we hire for in the United States.
Annapurna Labs designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions that solve challenges which seemed unimaginable only a short time ago. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and to deliver results that help our customers change the world.
Role
AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators. This role is for a Machine Learning Engineer on one of our AWS Neuron teams:
The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium instances. Experience training large models in Python is a must. FSDP (Fully Sharded Data Parallel), DeepSpeed, NeMo, and other distributed training libraries are central to this work, and extending them to Neuron-based systems is key.

The ML Frameworks team partners with compiler, runtime, and research experts to make AWS Trainium and Inferentia feel native inside the tools builders already love: PyTorch, JAX, and the rapidly evolving vLLM ecosystem. By weaving the Neuron SDK deep into these frameworks, optimizing operators, and crafting targeted extensions, we unlock every teraflop of Annapurna's AI chips for both training and fast inference. Beyond kernels, we shape next-generation serving by upstreaming new features and driving scalable deployments with vLLM, Triton, and TensorRT, turning breakthrough ideas into production-ready AI for millions of customers.

The ML Inference team collaborates closely with hardware designers, software optimization experts, and systems engineers to develop and optimize high-performance inference solutions for Inferentia chips. Proficiency in deploying and optimizing ML models for inference using frameworks such as TensorFlow, PyTorch, and ONNX is essential. The team focuses on techniques such as quantization, pruning, and model compression to improve inference speed and efficiency, and adapting and extending popular inference libraries and tools for Neuron-based systems is a key aspect of its work.
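To give a flavor of the model-compression work mentioned above, the sketch below shows symmetric per-tensor int8 quantization in plain Python. This is a minimal illustration only, not part of the Neuron SDK: the function names are hypothetical, and real inference toolchains handle calibration, per-channel scales, and hardware-specific details.

```python
def quantize_int8(values):
    """Map float values to int8 codes with a shared symmetric scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round to the nearest code and clamp to the int8 range.
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value is within one quantization step (scale) of the original.
```

Storing 8-bit codes plus a single scale in place of 32-bit floats cuts memory traffic by roughly 4x, which is one reason quantization matters for inference throughput.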
Key job responsibilities
You'll join one of our core ML teams - Frameworks, Distributed Training, or Inference - to enhance machine learning capabilities on AWS's specialized AI hardware. Your responsibilities will include improving PyTorch and JAX for distributed training on Trainium chips, optimizing ML models for efficient inference on Inferentia processors, and collaborating with compiler and runtime teams to maximize hardware performance. You'll also develop and integrate new features in ML frameworks to support AWS AI services. We seek candidates with strong programming skills, eagerness to learn complex systems, and basic ML knowledge. This role offers growth opportunities in ML infrastructure, bridging the gap between frameworks, distributed systems, and hardware acceleration.
About The Team
Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. Over the last few years we have delivered products such as AWS Nitro, ENA, EFA, the Graviton and F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage.
Basic Qualifications
To qualify, applicants should have earned (or will earn) a Bachelor's or Master's degree between December 2022 and September 2025.
- Working knowledge of C++ and Python
- Experience with ML frameworks, particularly PyTorch, JAX, and/or vLLM
- Understanding of parallel computing concepts and CUDA programming
Preferred Qualifications
- Open source contributions to ML frameworks or tools
- Experience optimizing ML workloads for performance
- Direct experience with PyTorch internals or CUDA optimization
- Hands-on experience with LLM infrastructure tools (e.g., vLLM, TensorRT)
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $99,500/year in our lowest geographic market up to $200,000/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.
Company
- Annapurna Labs (U.S.) Inc.
Job ID: A2956023
Seniority level: Not Applicable
Employment type: Full-time
Job function: Information Technology, Consulting, and Engineering
Industries: IT Services and IT Consulting