Amazon Web Services (AWS)
Senior Software Development Engineer - AI/ML, AWS Neuron, Multimodal Inference
Amazon Web Services (AWS), Cupertino, California, United States, 95014
Posted 2 days ago.
Description
The Annapurna Labs team at AWS builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium. The Neuron SDK includes an ML compiler, runtime, and application framework that integrate seamlessly with popular ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance.
The Inference Enablement and Acceleration team works at the forefront of running a wide range of models and supporting novel architectures while maximizing performance on AWS’s custom ML accelerators. Working across the stack from PyTorch to the hardware‑software boundary, our engineers build systematic infrastructure, innovate new methods, and create high‑performance kernels for ML functions, ensuring every compute unit is fine‑tuned for optimal performance for our customers’ demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what’s possible in AI acceleration.
As part of the broader Neuron organization, the team works across multiple technology layers—from frameworks and kernels to compiler and runtime—to optimize current performance and contribute to future architecture designs. The role offers a unique opportunity to work at the intersection of machine learning, high‑performance computing, and distributed architectures, where you’ll help shape the future of AI acceleration technology.
Key Responsibilities
Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
Participate in all stages of the ML system development lifecycle including distributed computing based architecture design, implementation, performance profiling, hardware‑specific optimizations, testing and production deployment.
Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
Design and implement high‑performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
Analyze and optimize system‑level performance across multiple generations of Neuron hardware.
Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
Implement optimizations such as fusion, sharding, tiling, and scheduling.
Conduct comprehensive testing, including unit and end‑to‑end model testing with continuous deployment and releases through pipelines.
Work directly with customers to enable and optimize their ML models on AWS accelerators.
Collaborate across teams to develop innovative optimization techniques.
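As a purely illustrative sketch of two of the optimizations named in the responsibilities above (fusion and tiling), here is a toy, self-contained Python example. Real Neuron work happens at the compiler and kernel level; the function names and shapes here are invented for the illustration only.

```python
# Toy illustrations of operator fusion and loop tiling, two of the
# optimizations listed above. Plain Python; names are hypothetical.

def unfused(xs, scale, bias):
    # Two separate passes over the data, with an intermediate buffer.
    scaled = [x * scale for x in xs]
    return [s + bias for s in scaled]

def fused(xs, scale, bias):
    # One pass, no intermediate buffer: the two ops are "fused".
    return [x * scale + bias for x in xs]

def tiled_matmul(a, b, n, tile=4):
    # n x n matrices stored as flat row-major lists, multiplied
    # tile-by-tile so each working set stays small (the point of tiling).
    c = [0.0] * (n * n)
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        aik = a[i * n + k]
                        for j in range(jj, min(jj + tile, n)):
                            c[i * n + j] += aik * b[k * n + j]
    return c
```

On real accelerators these transformations are applied by the compiler or hand-written kernels over on-chip memory tiles, but the correctness argument is the same: the fused and tiled versions must produce the same results as the naive ones.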
A Day in the Life
You will collaborate with a cross‑functional team of applied scientists, system engineers, and product managers to deliver state‑of‑the‑art inference capabilities for generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron’s inference stack across Amazon and the open source community. You’ll build high‑impact solutions, participate in design discussions and code reviews, and communicate with internal and external stakeholders in a startup‑like environment.
About The Team
The Inference Enablement and Acceleration team fosters a builder’s culture where experimentation is encouraged. Collaboration, technical ownership, and continuous learning are valued. Our senior members provide one‑on‑one mentoring and thorough, but kind, code reviews.
Basic Qualifications
5+ years of non‑internship professional software development experience.
Bachelor’s degree or equivalent in Computer Science.
5+ years of non‑internship experience in the design or architecture (design patterns, reliability, and scaling) of new and existing systems.
Solid fundamentals in machine learning and LLMs, including their architectures and their training and inference lifecycles, along with experience optimizing model execution.
Software development experience in C++ or Python (experience in at least one language is required).
Strong understanding of system performance, memory management, and parallel computing principles.
Proficiency in debugging, profiling, and implementing best software engineering practices in large‑scale systems.
Preferred Qualifications
Familiarity with PyTorch, JIT compilation, and AOT tracing.
Familiarity with CUDA kernels or equivalent ML or low‑level kernels.
Experience with performant kernel development, e.g., with CUTLASS, FlashInfer, or similar libraries.
Familiarity with tile‑level programming syntax and semantics, such as Triton’s.
Experience with online/offline inference serving with vLLM, SGLang, TensorRT or similar platforms in production environments.
Deep understanding of computer architecture, operating systems level software and working knowledge of parallel computing.
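To make the tensor-parallel flavor of the qualifications above concrete, here is a minimal, hedged sketch of sharding a linear layer's weight across hypothetical "devices" in plain Python. Everything here (function names, shapes) is invented for illustration; production stacks do this with real collective operations across accelerator cores.

```python
# Toy sketch of tensor-parallel sharding: split a weight matrix's output
# dimension across "devices", compute partial results, and reassemble.
# All names here are hypothetical; this is not a Neuron or vLLM API.

def matvec(w_rows, x):
    # y[i] = sum_j w[i][j] * x[j] for each owned row of the weight.
    return [sum(wi * xj for wi, xj in zip(row, x)) for row in w_rows]

def shard_rows(w, num_shards):
    # Each "device" owns a contiguous slice of the output dimension.
    per = (len(w) + num_shards - 1) // num_shards
    return [w[s * per:(s + 1) * per] for s in range(num_shards)]

def sharded_matvec(w, x, num_shards):
    # Each shard computes its partial output independently; a gather
    # (here: simple concatenation) reassembles the full output vector.
    parts = [matvec(shard, x) for shard in shard_rows(w, num_shards)]
    return [v for part in parts for v in part]
```

The design point the sketch shows: because each shard's rows are independent, the per-device compute needs no communication until the final gather, which is why this partitioning scales well for inference serving.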
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Salary: Base pay for this position ranges from $151,300 per year in our lowest geographic market up to $261,500 per year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job‑related knowledge, skills, and experience. Amazon is a total compensation company. Depending on the position offered, equity, sign‑on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and other benefits.
Location: Mountain View, CA