Amazon Web Services (AWS)
Senior Software Development Engineer - AI/ML, AWS Neuron, Multimodal Inference
Amazon Web Services (AWS), Seattle, Washington, US, 98127
Join the Annapurna Labs team at Amazon Web Services (AWS) to build AWS Neuron, the SDK that accelerates deep learning and GenAI workloads on Amazon’s custom ML accelerators, Inferentia and Trainium.
As part of the Inference Enablement and Acceleration team, you’ll work across the stack from PyTorch and JAX up to the hardware-software boundary, designing and implementing high-performance kernels and distributed inference solutions for large language models such as the Llama family and DeepSeek.
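By way of illustration, the sketch below shows the column-sharded (tensor-parallel) matmul at the heart of the distributed LLM inference work described above, simulated in plain PyTorch on a single device. The shapes, shard count, and helper name are invented for the example; this is not Neuron code, and a real deployment would use collective communication across devices rather than an in-process concatenation.

```python
# Illustrative only: column-sharded (tensor-parallel) matmul, the kind of
# sharding used in distributed LLM inference. This simulates the math on
# one device; real multi-device code replaces the concatenation below
# with an all-gather collective.
import torch

def column_sharded_linear(x, weight, num_shards):
    """Split the weight's output dimension across `num_shards` and
    concatenate the partial results."""
    shards = weight.chunk(num_shards, dim=0)   # each: [out/num_shards, in]
    partials = [x @ w.T for w in shards]       # each: [batch, out/num_shards]
    return torch.cat(partials, dim=-1)         # [batch, out]

x = torch.randn(4, 512)
w = torch.randn(2048, 512)
# Sharded result matches the unsharded matmul.
assert torch.allclose(column_sharded_linear(x, w, 4), x @ w.T, atol=1e-5)
```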
Key job responsibilities
Design, develop, and optimize machine-learning models and frameworks for deployment on custom ML hardware accelerators.
Participate in all stages of the ML system development lifecycle, including architecture design, implementation, performance profiling, hardware-specific optimizations, testing, and production deployment.
Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
Analyze and optimize system-level performance across multiple generations of Neuron hardware.
Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
Implement optimizations such as fusion, sharding, tiling, and scheduling (a toy tiling sketch follows this list).
Conduct comprehensive testing, including unit and end-to-end model testing with continuous deployment through pipelines.
Work directly with customers to enable and optimize their ML models on AWS accelerators.
Collaborate across teams to develop innovative optimization techniques.
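To make the tiling item above concrete, here is a toy NumPy sketch of a tiled matmul: the output is computed one fixed-size tile at a time so each tile's working set would fit in fast on-chip memory (SBUF on Neuron devices, shared memory on GPUs). The tile size and function name are invented for illustration; real Neuron kernels are written against Neuron's kernel interfaces, not NumPy.

```python
# Toy sketch of kernel tiling: compute a matmul in fixed-size tiles so
# each tile's inputs and partial output stay in fast on-chip memory.
import numpy as np

def tiled_matmul(a, b, tile=64):
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Accumulate one output tile from tile-sized slices of
                # the inputs (the working set a real kernel would keep
                # on-chip).
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out

a = np.random.rand(128, 128).astype(np.float32)
b = np.random.rand(128, 128).astype(np.float32)
assert np.allclose(tiled_matmul(a, b), a @ b, atol=1e-4)
```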
A day in the life
You will collaborate with a cross-functional team of applied scientists, systems engineers, and product managers to deliver state-of-the-art inference capabilities for generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron's inference stack across Amazon and the open-source community. You will also build high-impact solutions for a large customer base, participate in design discussions and code reviews, and communicate with internal and external stakeholders in a fast-paced, startup-like environment.
About the team
The Inference Enablement and Acceleration team fosters a builder's culture where experimentation is encouraged and impact is measurable. We emphasize collaboration, technical ownership, and continuous learning. Senior members provide mentoring and code reviews, and knowledge-sharing is celebrated across all experience levels.
Basic Qualifications
5+ years of non-internship professional software development experience.
Bachelor's degree in Computer Science or equivalent.
5+ years of non-internship experience in the design or architecture of new and existing systems.
Solid fundamentals of machine learning and LLMs, including model architectures, training and inference lifecycles, and hands-on optimization experience.
Software development experience in C++ or Python (at least one is required).
Strong understanding of system performance, memory management, and parallel computing principles.
Proficiency in debugging, profiling, and implementing best software engineering practices in large-scale systems.
Preferred Qualifications
Familiarity with PyTorch, JIT compilation, and AOT tracing.
Familiarity with CUDA kernels or equivalent low-level ML kernel development.
Experience developing high-performance kernels with frameworks such as CUTLASS or FlashInfer.
Familiarity with tile-level programming syntax and semantics, as in Triton.
Experience with online/offline inference serving using vLLM, SGLang, TensorRT, or similar platforms in production environments (a minimal serving sketch follows this list).
Deep understanding of computer architecture, operating system level software, and parallel computing.
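For concreteness, offline batch inference with vLLM looks roughly like the sketch below. The model name and sampling settings are illustrative placeholders, and exact arguments may vary by vLLM version.

```python
# Minimal sketch of offline LLM inference with vLLM. The model name is
# an example placeholder; exact arguments may differ across versions.
from vllm import LLM, SamplingParams

# tensor_parallel_size shards the model across devices, the kind of
# distributed inference described in this posting.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=1)
params = SamplingParams(temperature=0.8, max_tokens=128)

outputs = llm.generate(["Explain KV-cache paging in one paragraph."], params)
for out in outputs:
    print(out.outputs[0].text)
```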
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. This position will remain posted until filled. Applicants should apply via our internal or external career site.
Job ID: A3139828