Advanced Micro Devices, Inc.

Fellow, EPYC AI Product Architecture

Advanced Micro Devices, Inc., Austin, Texas, US, 78716

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next‑generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE

AMD is seeking a Fellow of EPYC AI Product Architecture to lead the definition of next‑generation CPU and platform innovations tailored for AI workloads. You will be a key technical leader within the EPYC AI Product organization, shaping AMD’s AI platform strategy across silicon, systems, and software. This role sits at the intersection of architecture, product definition, customer engagement, and business impact, driving differentiated solutions across cloud, enterprise, and hyperscale deployments.

You will collaborate with world‑class engineers, technologists, and customers to deliver high‑performance, efficient, and scalable platforms for deep learning, generative AI, recommendation systems, and classical ML. You will engage deeply with AMD’s silicon, platform, and software teams to translate workload insights into architectural innovations and platform capabilities that shape the future of AI compute.

THE PERSON

We are looking for a visionary, results‑driven technical leader with a deep understanding of AI workload requirements and system‑level architecture. You combine technical breadth across CPUs, servers, and AI acceleration platforms with customer fluency and strategic business insight. You are equally comfortable engaging in low‑level performance modeling as you are briefing customers, analysts, and press on AMD’s roadmap and product direction.

Key Attributes:

- Deep technical expertise in CPU and server architecture for AI workloads
- Proven track record influencing AI platform design at the pod, rack, or datacenter scale
- Strong understanding of AI software ecosystems, frameworks, and optimization flows
- Data‑driven mindset, with ability to analyze and forecast workload performance across complex systems
- Exceptional communicator who can translate technical complexity into compelling product narratives

KEY RESPONSIBILITIES

- Lead architecture definition for AMD EPYC CPU and server platforms optimized for AI training and inference
- Engage with hyperscalers, OEMs, and AI ISVs to align platform features with evolving workload needs
- Evaluate and drive new CPU and platform features for deep learning models, including generative AI, vision, and recommender systems
- Analyze performance bottlenecks using architecture simulation and hardware instrumentation; propose workload‑driven improvements
- Drive architectural trade‑off analyses across compute, memory, I/O, and network subsystems
- Build and refine performance models, automation tools, and workload testbeds for end‑to‑end analysis
- Project and compare performance vs TCO tradeoffs under different system and silicon configurations
- Shape AMD’s platform strategy for heterogeneous compute, working closely with GPU and AI accelerator teams
- Represent AMD in industry forums, customer briefings, analyst interactions, and press engagements

PREFERRED EXPERIENCE

- 10+ years in high‑performance CPU, server, or AI platform architecture, ideally with customer‑facing responsibilities
- Expertise in AI system deployments at scale (cloud, enterprise, HPC, or edge)
- Demonstrated thought leadership in generative AI (LLMs), vision, or recommender systems
- Hands‑on experience with performance tools, roofline models, and system simulation
- Familiarity with AI compilers, quantization flows (QAT/PTQ), and workload optimization techniques
- Proficient in deep learning frameworks such as PyTorch and TensorFlow, and inference runtimes like ONNX Runtime or TensorRT
- Understanding of model deployment pipelines, sparsity techniques, advanced numeric formats, and mixed precision
- Optional: CUDA programming or Hugging Face pipelines
- Track record of cross‑functional leadership and working in fast‑paced, ambiguous environments

ACADEMIC & PROFESSIONAL CREDENTIALS

- MS or PhD in Computer Engineering, Computer Science, Electrical Engineering, or a related field
- Recognized industry or academic thought leader; publications and patents in AI architecture are a strong plus

LOCATION

Preferred locations: Santa Clara, CA or Austin, TX

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee‑based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third‑party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
