Energy Jobline ZR

Senior Software Development Engineer - AI/ML, AWS Neuron, Multimodal Inference

Energy Jobline ZR, Seattle, Washington, us, 98127


The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. This comprehensive toolkit includes an ML compiler, runtime, and application framework that integrates seamlessly with popular ML frameworks like PyTorch and JAX, enabling high-performance ML inference and training.

The Inference Enablement and Acceleration team is at the forefront of running a wide range of models and supporting novel architectures while maximizing performance on AWS's custom ML accelerators. Working across the stack, from PyTorch down to the hardware‑software boundary, our engineers build systematic enablement infrastructure, innovate new methods, and create high‑performance kernels for ML operations, fine‑tuning every compute unit for optimal performance.

As part of the broader Neuron organization, the team works across multiple technology layers: frameworks, kernels, compiler, runtime, and collectives. It optimizes current performance and contributes to future architecture designs, working closely with customers to enable their models and ensure optimal performance.

Key job responsibilities

Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.

Participate in all stages of the ML system development lifecycle, including distributed computing‑based architecture design, implementation, performance profiling, hardware‑specific optimizations, testing, and production deployment.

Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.

Design and implement high‑performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.

Analyze and optimize system‑level performance across multiple Neuron hardware platforms.

Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.

Implement optimizations such as fusion, sharding, tiling, and scheduling.

Conduct comprehensive testing, including unit and end‑to‑end model testing with continuous deployment and releases through pipelines.

Work directly with customers to enable and optimize their ML models on AWS accelerators.

Collaborate across teams to develop innovative optimization techniques.
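To give a flavor of the optimization work listed above: tiling restructures a computation so each working set fits in a fast memory tier before moving on, which is the same idea accelerators apply on-chip. The sketch below is purely illustrative (plain Python, not Neuron or team code); the function name and tile size are hypothetical.

```python
def tiled_matmul(a, b, tile=2):
    """Multiply matrices a @ b tile by tile.

    Illustrative only: iterating over small sub-blocks keeps each
    working set cache-resident, the core idea behind the tiling
    optimizations mentioned in this role (real kernels would target
    on-chip SRAM rather than CPU caches).
    """
    m, k = len(a), len(a[0])
    k2, n = len(b), len(b[0])
    assert k == k2, "inner dimensions must match"
    out = [[0.0] * n for _ in range(m)]
    # Walk the output and reduction dimensions in tile-sized steps.
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                # Accumulate the partial product of one tile pair.
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        for p in range(p0, min(p0 + tile, k)):
                            out[i][j] += a[i][p] * b[p][j]
    return out
```

Fusion, sharding, and scheduling follow the same spirit: restructure where and when work happens without changing the mathematical result.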

About the team

The Inference Enablement and Acceleration team fosters a builder’s culture where experimentation is encouraged and impact is measurable. The team emphasizes collaboration, technical ownership, and continuous learning. Senior members mentor, conduct thorough code reviews, and support career growth. Join us to solve some of the most interesting and impactful infrastructure challenges in AI/ML today.

Basic Qualifications

5+ years of professional software development experience.

Bachelor’s degree in computer science or equivalent.

5+ years of experience designing or architecting new and existing systems, focusing on scalability, reliability, and design patterns.

Solid grasp of machine learning and LLM fundamentals, including model architectures and the training and inference lifecycles, plus experience optimizing model execution.

Software development experience in C++ and/or Python (at least one is required).

Strong understanding of system performance, memory management, and parallel computing principles.

Proficiency in debugging, profiling, and implementing best software engineering practices in large‑scale systems.

Familiarity with PyTorch, JIT compilation, and AOT tracing.

Familiarity with CUDA kernels or equivalent ML or low‑level kernels.

Experience with online/offline inference serving with vLLM, SGLang, TensorRT or similar platforms in production environments.

Deep understanding of computer architecture, operating systems level software, and parallel computing.

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status or other legally protected status.
