Advanced Micro Devices, Inc.
Principal/Senior GPU Software Performance Engineer - Training at Scale
Advanced Micro Devices, Inc., San Jose, California, United States, 95199
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences: from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.
THE ROLE: We train large models across multi-GPU clusters. Your charter is to make training materially faster and cheaper by leading kernel-level performance engineering, from math kernels and fused epilogues to cluster-level throughput, partnering with researchers, framework teams, and infrastructure.
KEY RESPONSIBILITIES:
Own kernel performance: Design, implement, and land high-impact HIP/C++ kernels (e.g., attention, layernorm, softmax, GEMM/epilogues, fused pointwise) that are wave-size portable and optimized for LDS, caches, and MFMA units (see the fused-epilogue sketch after this list).
Lead profiling & tuning: Build repeatable workflows with timelines, hardware counters, and roofline analysis; remove memory bottlenecks; tune launch geometry and occupancy; validate speedups with A/B harnesses (see the timing-harness sketch after this list).
Drive fusion & algorithmic improvements: Identify profitable fusions, tiling strategies, vectorized I/O, shared-memory/scratchpad layouts, asynchronous pipelines, and warp/wave-level collectives while maintaining numerical stability.
Influence frameworks & libraries: Upstream or extend performance-critical ops in PyTorch/JAX/XLA/Triton; evaluate and integrate vendor math libraries; guide compiler/codegen choices for target architectures.
Scale beyond one GPU: Optimize P2P and collective communications, overlap compute and communication, and improve data/pipeline/tensor parallelism throughput across nodes (see the overlap sketch after this list).
Benchmarking & SLOs: Define and own KPIs (throughput, time-to-train, $/step, energy/step); maintain dashboards, perf CI gates, and regression triage.
Technical leadership: Mentor senior engineers, set coding/perf standards, lead performance "war rooms," and partner with silicon/vendor teams on microarchitecture-aware optimizations.
Quality & reliability: Build reproducible perf harnesses, deterministic test modes, and documentation/playbooks so improvements persist release over release.
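To make the kernel-level work above concrete, here is a minimal HIP sketch of the kind of fused epilogue referenced under "Own kernel performance" and "Drive fusion & algorithmic improvements": a bias + tanh-GELU pointwise fusion with vectorized float4 I/O. The kernel name, launch geometry, and activation choice are illustrative assumptions, not an existing AMD kernel.

```cpp
// Hypothetical fused bias + tanh-GELU epilogue, sketched to illustrate pointwise
// fusion with vectorized, coalesced I/O. Names and tile sizes are illustrative.
#include <hip/hip_runtime.h>

__device__ inline float gelu_tanh(float x) {
    // tanh approximation of GELU
    const float k0 = 0.7978845608f;   // sqrt(2/pi)
    const float k1 = 0.044715f;
    return 0.5f * x * (1.0f + tanhf(k0 * (x + k1 * x * x * x)));
}

// One output row per block; float4 loads/stores keep global accesses wide and coalesced.
__global__ void fused_bias_gelu_f32x4(const float4* __restrict__ in,
                                      const float4* __restrict__ bias,
                                      float4* __restrict__ out,
                                      int vec_cols)            // columns / 4
{
    const int row = blockIdx.x;
    for (int c = threadIdx.x; c < vec_cols; c += blockDim.x) {
        float4 v = in[row * vec_cols + c];
        float4 b = bias[c];
        v.x = gelu_tanh(v.x + b.x);
        v.y = gelu_tanh(v.y + b.y);
        v.z = gelu_tanh(v.z + b.z);
        v.w = gelu_tanh(v.w + b.w);
        out[row * vec_cols + c] = v;
    }
}

// Launch sketch: one block per row, e.g.
//   hipLaunchKernelGGL(fused_bias_gelu_f32x4, dim3(rows), dim3(256), 0, stream,
//                      in, bias, out, cols / 4);
```

Because the kernel makes no assumption about wavefront width, the same source runs unchanged on wave32 and wave64 parts.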
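The A/B validation mentioned under "Lead profiling & tuning" typically reduces to a small event-based harness like the sketch below; run_baseline and run_candidate are hypothetical stand-ins for the two code paths under comparison.

```cpp
// Minimal A/B timing harness: time a baseline and a candidate kernel with hipEvents
// and report the average per-launch time. The launch callables are placeholders.
#include <hip/hip_runtime.h>
#include <cstdio>

template <typename F>
float time_ms(F&& launch, hipStream_t stream, int warmup = 10, int iters = 100) {
    hipEvent_t start, stop;
    hipEventCreate(&start);
    hipEventCreate(&stop);
    for (int i = 0; i < warmup; ++i) launch();          // warm caches and clocks
    hipEventRecord(start, stream);
    for (int i = 0; i < iters; ++i) launch();
    hipEventRecord(stop, stream);
    hipEventSynchronize(stop);
    float ms = 0.0f;
    hipEventElapsedTime(&ms, start, stop);
    hipEventDestroy(start);
    hipEventDestroy(stop);
    return ms / iters;                                  // average per launch
}

// Usage sketch:
//   float a = time_ms([&] { run_baseline(stream); }, stream);
//   float b = time_ms([&] { run_candidate(stream); }, stream);
//   printf("baseline %.3f ms, candidate %.3f ms, speedup %.2fx\n", a, b, a / b);
```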
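As a rough illustration of the compute/communication overlap called for under "Scale beyond one GPU", the sketch below issues a peer-to-peer copy on one stream while compute proceeds on another; buffer names, device indices, and the synchronization point are placeholders.

```cpp
// Sketch of overlapping a P2P transfer with compute by putting them on different streams.
#include <hip/hip_runtime.h>

void overlap_step(const float* d_send /* on GPU 0 */, float* d_recv /* on GPU 1 */,
                  size_t bytes, hipStream_t compute_stream, hipStream_t comm_stream) {
    // Assumes hipDeviceEnablePeerAccess() was called once at startup for this GPU pair.
    hipSetDevice(0);

    // Kick off the P2P transfer of the previous micro-batch's results on the comm stream...
    hipMemcpyPeerAsync(d_recv, /*dstDeviceId=*/1, d_send, /*srcDeviceId=*/0,
                       bytes, comm_stream);

    // ...while the next micro-batch's compute runs concurrently on the compute stream,
    // e.g. hipLaunchKernelGGL(train_step_kernel, grid, block, 0, compute_stream, ...);
    (void)compute_stream;

    // Wait only on the dependency that actually exists.
    hipStreamSynchronize(comm_stream);
}
```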
PREFERRED EXPERIENCE:
Experience in systems/HPC/ML performance engineering, with hands-on GPU kernel work and shipped optimizations in production training or HPC.
Expert in modern C++ (C++17+) and at least one GPU programming model (CUDA, HIP, or SYCL/oneAPI) or a GPU kernel DSL (e.g., Triton); comfortable with templates, memory qualifiers, atomics, and warp/wave-level collectives (see the wave-level reduction sketch after this list).
Deep understanding of GPU microarchitecture: SIMT execution, occupancy vs. register/scratchpad pressure, memory hierarchy (global/L2/shared or LDS), coalescing, bank conflicts, vectorization, and instruction-level parallelism.
Proficiency with profiling & analysis: timelines and counters (e.g., Nsight Systems/Compute, rocprof/Omniperf, VTune/GPA, or equivalents), ISA/disassembly inspection, and correlating metrics to code changes.
Proven track record of reducing time-to-train or $/step via kernel and collective-communication optimizations on multi-GPU clusters.
Strong Linux fundamentals (perf/eBPF, NUMA, PCIe/links), build systems (CMake/Bazel), Python, and containerized development (Docker/Podman).
Experience with distributed training (PyTorch DDP/FSDP/ZeRO/DeepSpeed or JAX) and GPU collectives.
Expertise in mixed precision (BF16/FP16/FP8), numerics, and stability/accuracy validation at kernel boundaries (see the BF16 accuracy-check sketch after this list).
Background in compiler/IR (LLVM/MLIR) or codegen for GPU backends; ability to guide optimization passes with performance goals.
Hands-on with cluster orchestration (Slurm/Kubernetes), IB/RDMA tuning, and compute/communication overlap strategies.
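For the warp/wave-level collectives mentioned in the C++/GPU programming bullet above, here is a minimal HIP reduction sketch; the kernel name and launch configuration are assumed for illustration.

```cpp
// Each wavefront reduces its partial sums with __shfl_down, then lane 0 of every wave
// combines across waves with an atomic add. Written against warpSize so the same code
// works on wave32 and wave64 parts.
#include <hip/hip_runtime.h>

__global__ void block_sum(const float* __restrict__ in, float* __restrict__ out, int n) {
    float v = 0.0f;
    // Grid-stride loop: each thread accumulates a strided slice of the input.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += gridDim.x * blockDim.x)
        v += in[i];

    // Butterfly-style reduction within the wavefront.
    for (int offset = warpSize / 2; offset > 0; offset >>= 1)
        v += __shfl_down(v, offset);

    // One atomic per wave keeps global-memory traffic low.
    if ((threadIdx.x % warpSize) == 0)
        atomicAdd(out, v);
}
```

Writing the shuffle loop against warpSize rather than a hard-coded 32 or 64 is what keeps the kernel portable across wave widths.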
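And for the kernel-boundary accuracy validation mentioned in the mixed-precision bullet, here is a host-side sketch that rounds an FP32 reference through BF16 and applies a relative-error check; the rounding helper and tolerance values are illustrative assumptions rather than a production policy.

```cpp
// Round an FP32 reference through BF16 (as a BF16 kernel input would be seen) and
// compare results against a mixed absolute/relative tolerance.
#include <cstdint>
#include <cstring>
#include <cmath>
#include <cstdio>
#include <vector>

// Round an FP32 value to the nearest BF16 (round-to-nearest-even on the dropped bits).
float to_bf16(float x) {
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof(bits));
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
    bits = (bits + rounding) & 0xFFFF0000u;
    float r;
    std::memcpy(&r, &bits, sizeof(r));
    return r;
}

// Report the first element whose error exceeds abs_tol + rel_tol * |ref|.
bool close_enough(const std::vector<float>& ref, const std::vector<float>& got,
                  float rel_tol = 1e-2f, float abs_tol = 1e-5f) {
    for (size_t i = 0; i < ref.size(); ++i) {
        float err = std::fabs(ref[i] - got[i]);
        if (err > abs_tol + rel_tol * std::fabs(ref[i])) {
            std::printf("mismatch at %zu: ref=%g got=%g\n", i, ref[i], got[i]);
            return false;
        }
    }
    return true;
}
```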
ACADEMIC CREDENTIALS:
Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.
LOCATION: San Jose, CA
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.