Amazon Web Services (AWS)
Senior Software Development Engineer, AI/ML, AWS Neuron, Model Inference
Amazon Web Services (AWS), Seattle, Washington, US 98127
Overview
AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the servers that use them. This role is for a software engineer on the Machine Learning Inference Model Enablement and Generality team for AWS Neuron at Annapurna Labs.
This role is responsible for the development, enablement, and performance tuning of a wide variety of model families, including massive-scale large language models such as the Llama family and DeepSeek, as well as Stable Diffusion, vision transformers, and many more. The Inference Model Enablement and Generality team works side by side with compiler and runtime engineers to create, build, and tune distributed inference solutions on Trainium and Inferentia. Experience optimizing LLM inference performance for both latency and throughput is highly desired, and experience with distributed inference libraries such as vLLM is a bonus.
Responsibilities
This role will help lead the effort to build distributed inference support for PyTorch in the Neuron SDK, and will tune these models to achieve the highest performance and efficiency on the AWS Trainium and Inferentia silicon and servers that customers run them on.
Strong software development skills in Python and solid ML knowledge are both critical to this role.
A Day in the Life
Design and code solutions that drive efficiencies in software architecture, create metrics, implement automation and other improvements, and resolve the root causes of software defects. Build high-impact solutions to deliver to our large customer base. Participate in design discussions and code reviews, and communicate with internal and external stakeholders. Work cross-functionally to influence business decisions with technical input. Work in a startup-like development environment.
About The Team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop engineering expertise.
Basic Qualifications
5+ years of non-internship professional software development experience
5+ years of non-internship experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
Fundamentals of machine learning and LLMs, including their architectures and their training and inference lifecycles, along with hands-on experience optimizing model execution
Experience programming with at least one programming language
Preferred Qualifications
5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
Master's degree in computer science or equivalent
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. If you need a workplace accommodation during the application process, visit the Amazon accommodations page. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay ranges from $151,300/year to $261,500/year, depending on location and experience. Amazon is a total compensation company. Equity, sign-on, and other benefits may be provided as part of a total compensation package. For more information, please visit Amazon benefits.
This position will remain posted until filled. Applicants should apply via our internal or external career site.
Company - Annapurna Labs (U.S.) Inc. Job ID: A3055172