Advanced Micro Devices, Inc.
Senior Staff AI Infrastructure Engineer
Advanced Micro Devices, Inc., Santa Clara, California, US, 95053
WHAT YOU DO AT AMD CHANGES EVERYTHING
At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.
Together, we advance your career.
Welcome to the Llama team, where curiosity runs wild and innovation is our natural habitat. We're a tight-knit group of passionate builders, educators, and AI explorers at AMD, united by a shared mission: to push the boundaries of what's possible with generative AI and make cutting-edge knowledge accessible to developers everywhere.
THE ROLE:
AMD is looking for a Senior Staff AI Infrastructure Engineer who is passionate about improving the performance of key applications and benchmarks, with a special focus on AI/ML workloads and GPU-accelerated computing. As a Senior Staff Engineer, you will be a technical leader within a core project, working at the intersection of hardware and software to optimize performance for next-generation AI applications, including Large Language Models (LLMs) and Agentic AI systems. You will work with the very latest hardware and software technology, providing technical leadership while driving complex technical initiatives.
THE PERSON:
The ideal candidate is passionate about software engineering and possesses strong leadership skills to drive complex issues to resolution. They must demonstrate technical depth and breadth in both traditional computing and emerging AI technologies, with the ability to influence technical direction and mentor other engineers, and they must communicate effectively and work well with teams across AMD.
Key Responsibilities:
• Lead technical initiatives and provide architectural guidance for AI/ML infrastructure and performance optimization.
• Optimize and accelerate LLM training and inference on AMD GPUs, improving kernel, communication, and end-to-end system efficiency.
• Develop and enhance infrastructure supporting LLMs, Agentic AI, and RAG systems.
• Design, build, and optimize AI workloads on GPU clusters, including large-scale training and inference orchestration, elastic scaling, and workload scheduling across heterogeneous hardware.
• Debug and resolve complex system-level performance issues across GPU, network, and runtime layers.
• Drive technical excellence, foster cross-team collaboration, and champion innovation within the organization.
Required Experience:
• 5+ years of experience in AI/ML infrastructure, distributed systems, or performance-critical software development.
• Expert-level proficiency in C/C++ and Python.
• Solid understanding of transformer-based architectures and distributed training frameworks such as Megatron-LM, DeepSpeed, and PyTorch Distributed.
• Proven experience optimizing LLM training and inference pipelines, including TP/PP/DP/ZeRO parallelism, quantization, and mixed-precision techniques.
• Hands-on experience designing, building, and scaling training or inference platforms using Kubernetes, Ray, or Kubeflow.
• Familiarity with GPU architecture and distributed communication libraries (e.g., NCCL, RCCL, MPI), with the ability to analyze and optimize multi-GPU training performance.
• Experience with profiling and performance-analysis tools for GPU optimization and system-level debugging.
• Demonstrated technical ownership, strong communication, and problem-solving skills, with a proven record of delivering end-to-end AI/ML infrastructure solutions.
Preferred Qualifications:
• In-depth experience with the AMD ROCm ecosystem, including HIP kernel optimization for training and inference.
• Hands-on experience with model optimization techniques such as quantization, pruning, and distillation for efficient deployment.
• Knowledge of GPU architecture, memory hierarchy, and compiler-level optimization (e.g., kernel fusion, graph scheduling).
• Familiarity with Agentic AI systems and autonomous AI workflows, including tool use, reasoning, and multi-agent orchestration for LLM-based applications.
Education:
• Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or related field.
• Master's degree preferred; PhD is a strong plus, especially with publications in distributed systems, AI infrastructure, or GPU computing.
#LI-TC1
#LI-HYBRID
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.