ZipRecruiter
Software Engineering Manager, ML Kernel Performance, AWS Neuron, Annapurna Labs
Cupertino, California, United States, 95014
Overview
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The Acceleration Kernel Library team optimizes performance for these accelerators, crafting high-performance kernels for ML functions at the hardware-software boundary. The AWS Neuron SDK includes an ML compiler, runtime, and application framework that integrates with popular ML frameworks such as PyTorch to enable accelerated inference and training. As part of the broader Neuron Compiler organization, the team works across multiple technology layers, from frameworks and compilers to runtime and collectives. The role involves optimizing current performance and contributing to future architecture designs, and the team works closely with customers on model enablement, providing direct support and optimization expertise to ensure ML workloads achieve optimal performance on AWS ML accelerators. This is a unique opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures, shaping the future of AI acceleration technology. You will architect and implement business-critical features, publish cutting-edge research, and mentor a team of experienced engineers in a startup-like environment where experimentation drives learning and innovation.
Responsibilities
Design and implement high-performance compute kernels for ML operations, leveraging the Neuron architecture and programming models
Analyze and optimize kernel-level performance across multiple generations of Neuron hardware
Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks
Implement compiler optimizations such as fusion, sharding, tiling, and scheduling
Work directly with customers to enable and optimize their ML models on AWS accelerators
Collaborate across teams to develop innovative kernel optimization techniques
Qualifications
3+ years of engineering team management experience
7+ years of experience working directly within engineering teams
3+ years of experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
8+ years of experience leading the definition and development of multi-tier web services
Knowledge of engineering practices and patterns for the full software/hardware/networks development life cycle, including coding standards, code reviews, source control management, build processes, testing, certification, and livesite operations
Experience partnering with product or program management teams
Experience communicating with users, other technical teams, and senior leadership to collect requirements and describe software product features, technical designs, and product strategy
Experience recruiting, hiring, mentoring/coaching, and managing teams of software engineers
Equal Opportunity: Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status or other legally protected status.