Amazon Web Services (AWS)

Engineering Manager- Operating Systems Performance, AWS Neuron, Annapurna Labs

Amazon Web Services (AWS), Cupertino, California, United States, 95014


Overview

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The SDK includes an ML compiler, runtime, and application framework that integrate with popular ML frameworks such as PyTorch to deliver performance gains in ML inference and training.

The Acceleration Kernel Library team focuses on maximizing performance on these accelerators. Our engineers design high-performance kernels for ML functions at the hardware-software boundary, combining deep hardware knowledge with ML expertise to push the boundaries of AI acceleration. As part of the broader Neuron Compiler organization, the team works across frameworks, compilers, runtime, and collectives, optimizing performance, contributing to future architecture designs, and collaborating with customers to enable their models and ensure optimal performance.

This role offers a unique opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures. You will architect and implement business-critical features, publish research, and mentor engineers. The team operates in a fast-moving environment with a focus on invention and experimentation, and collaborates closely with customers on model enablement and optimization for AWS ML accelerators.

Explore the product and our history:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success

Key Job Responsibilities

Our kernel engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. You will work at the intersection of software, hardware, and ML systems, applying expertise in low-level optimization, system architecture, and ML model acceleration.

In this role you will:
- Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
- Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
- Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
- Implement compiler optimizations such as fusion, sharding, tiling, and scheduling
- Work directly with customers to enable and optimize their ML models on AWS accelerators
- Collaborate across teams to develop innovative kernel optimization techniques

A Day in the Life

As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root causes of software defects. You'll also build high-impact solutions for our large customer base, participate in design discussions and code reviews, and work cross-functionally to inform business decisions with your technical input. You'll operate in a startup-like development environment, always focusing on the most important work.

About The Team

#1. Why AWS: Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and continue innovating to power customers ranging from startups to Global 500 companies.
#2. Inclusive Team Culture: AWS values inclusion, with multiple employee-led affinity groups, ongoing learning experiences, and leadership principles that emphasize curiosity, trust, and diverse perspectives.
#3. Work/Life Balance: We value work-life balance and offer flexible hours to help you find your own balance between work and life.
#4. Mentorship & Career Growth: We support new team members with mentorship and opportunities to take on progressively complex tasks.
#5. Diverse Experiences: We encourage you to apply even if you do not meet all listed qualifications, and we support varied career paths.

Basic Qualifications

- 3+ years of engineering team management experience
- 7+ years of working directly within engineering teams
- 3+ years of designing or architecting systems (design patterns, reliability, scaling)
- 8+ years of leading the development of multi-tier web services
- Knowledge of full software/hardware/networks development life cycle practices
- Experience partnering with product or program management teams

Preferred Qualifications

- Experience communicating requirements and designs to users and senior leadership
- Experience recruiting, mentoring, and managing teams of software engineers

Amazon is an equal opportunity employer and does not discriminate on the basis of protected status. Information about applicable local fair chance laws is included where required. For workplace accommodations, visit the Amazon accommodations page.

Our compensation reflects the cost of labor across US markets. The base pay range for this position is $166,400/year to $287,700/year, with possible equity, sign-on, and other benefits as part of a total compensation package. This position will remain posted until filled. Applicants should apply via our internal or external career site.

Company

Annapurna Labs (U.S.) Inc. - D63
Job ID: A3081296
