Apple Inc.
Seattle, Washington, United States
Software and Services
Imagine what you could do here. At Apple, innovative ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish!

As part of Apple Cloud AI, we are building the next generation of ML infrastructure that powers AI capabilities across Apple's products and services. Our team tackles some of the most challenging problems in the industry: optimizing LLM inference at massive scale, building distributed training systems that push the boundaries of GPU and TPU utilization, and architecting model serving platforms that deliver sub-millisecond latency for real-time AI experiences. You'll work with cutting-edge technologies including vLLM, Ray, TensorRT-LLM, TPU infrastructure, and custom inference engines, while shaping how foundation models are trained, fine-tuned, and deployed across Apple's ecosystem.

As a Lead GenAI/ML Engineer, you will architect high-performance ML systems from the ground up: designing efficient KV-cache strategies, implementing speculative decoding, optimizing tensor parallelism across GPU and TPU clusters, and building the infrastructure that brings Apple's most ambitious AI capabilities to life.
Description

This role requires translating cutting-edge ML research into production-ready systems that meet the demanding requirements of Apple's ML workloads. You will work closely with research teams to productionize new model architectures and optimization techniques. We are looking for candidates who thrive at the intersection of ML research and systems engineering: someone who can read a paper on FlashAttention or PagedAttention and implement a production-grade version, or who can profile a training job and identify opportunities to improve GPU utilization from 40% to 80%.
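To give a concrete flavor of this kind of work, here is a minimal sketch of the paged KV-cache idea behind PagedAttention: a per-sequence block table maps logical token positions onto fixed-size physical blocks, so cache memory is allocated on demand instead of reserved for the maximum sequence length. All names below are illustrative assumptions, not any production API.

```python
# Illustrative sketch only: logical tokens -> fixed-size physical KV blocks.
BLOCK_SIZE = 16  # tokens per physical KV block

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))  # pool of physical blocks
        self.block_tables = {}                      # seq_id -> [block ids]
        self.seq_lens = {}                          # seq_id -> token count

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a (block, offset) slot for one new token of seq_id."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.seq_lens.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:  # current block full, or no block yet
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted; preempt a sequence")
            table.append(self.free_blocks.pop())
        self.seq_lens[seq_id] = length + 1
        return table[-1], length % BLOCK_SIZE  # physical slot for this token

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)

cache = PagedKVCache(num_blocks=4)
for _ in range(20):                 # 20 tokens span two 16-token blocks
    block, offset = cache.append_token(seq_id=0)
print(cache.block_tables[0])        # [3, 2]: physically non-contiguous blocks
cache.free(seq_id=0)
```

The payoff of this indirection is that sequences of very different lengths share one block pool with near-zero fragmentation, which is what makes large continuous batches memory-feasible.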
Responsibilities
In this role, you will have significant responsibilities in advancing the technical capabilities of Apple Cloud AI by building robust, scalable ML infrastructure. You will influence the technical direction of our ML platform by driving innovation in distributed training, inference optimization, and model serving systems.

Core Focus Areas:
LLM Inference Optimization: Design and implement high-performance inference pipelines, including KV-cache optimization, continuous batching, speculative decoding, and quantization strategies (INT8, FP8, AWQ, GPTQ); a brief sketch of speculative decoding follows this list
Distributed Training Systems: Build and optimize large‑scale training infrastructure across GPU and TPU clusters, implementing efficient data/tensor/pipeline parallelism strategies
Model Serving at Scale: Architect low‑latency serving systems capable of handling Apple‑scale traffic with strict SLA requirements
Hardware‑Aware Optimization: Deep optimization for NVIDIA GPUs (H100, B200) and Google TPUs, including custom CUDA kernels and XLA optimizations
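As a concrete illustration of one technique named above, the following is a minimal greedy-verification sketch of speculative decoding: a cheap draft model proposes a few tokens and the target model keeps the longest prefix it agrees with. The callables are hypothetical stand-ins; production engines (e.g., vLLM) verify all draft positions in a single batched forward pass and use rejection sampling to preserve the target distribution exactly.

```python
# Greedy-verification sketch of speculative decoding; names are hypothetical.
from typing import Callable, List

def speculative_step(
    prefix: List[int],
    draft_next: Callable[[List[int]], int],   # cheap model: one greedy token
    target_next: Callable[[List[int]], int],  # full model: one greedy token
    k: int = 4,
) -> List[int]:
    """Propose k draft tokens, keep the longest prefix the target agrees with."""
    # 1. Draft phase: k cheap autoregressive steps.
    draft, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # 2. Verify phase: a real engine checks all k positions in ONE batched
    #    target forward pass; we loop here only for clarity.
    accepted, ctx = [], list(prefix)
    for t in draft:
        expected = target_next(ctx)
        if expected != t:
            accepted.append(expected)  # target's correction ends the step
            break
        accepted.append(t)
        ctx.append(t)
    else:
        accepted.append(target_next(ctx))  # bonus token: all k accepted

    return prefix + accepted

# Toy usage: the target counts by 1; the draft is right until the value 3,
# so one step accepts three draft tokens plus the target's correction.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if ctx[-1] < 3 else 0
print(speculative_step([0], draft, target, k=4))  # [0, 1, 2, 3, 4]
```

Because verification costs roughly one target forward pass regardless of how many draft tokens are accepted, each accepted token beyond the first is nearly free, which is where the latency win comes from.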
Minimum Qualifications
8+ years of experience in ML systems engineering, with at least 3 years focused on LLM/GenAI infrastructure
Deep expertise in LLM inference optimization: KV‑cache management, batching strategies, quantization, speculative decoding
Strong proficiency in Python and C++/CUDA for performance‑critical code
Hands‑on experience with inference frameworks: vLLM, TensorRT‑LLM, Triton Inference Server, or equivalent
Experience with distributed training at scale using frameworks like DeepSpeed, Megatron‑LM, FSDP, or Ray Train
Solid understanding of transformer architectures and attention mechanisms at the implementation level
Experience optimizing ML workloads on NVIDIA GPUs (profiling, memory optimization, kernel tuning)
Track record of taking ML systems from research/prototype to production at scale
MS or PhD in Computer Science, Machine Learning, or equivalent practical experience
Preferred Qualifications
Experience with TPU infrastructure (JAX/XLA, TPU training/serving optimization)
Contributions to open‑source ML infrastructure projects (vLLM, Ray, TensorRT‑LLM, etc.)
Experience with custom CUDA kernel development or OpenAI's Triton kernel language
Deep knowledge of model compression techniques: pruning, distillation, mixed‑precision training
Experience with multi‑node training orchestration and fault tolerance
Familiarity with emerging architectures: MoE models, linear attention variants, state‑space models
Experience building ML platforms serving high QPS with strict latency requirements
At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $171,600 and $302,200, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. You’ll also receive benefits including: comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
Apple accepts applications to this posting on an ongoing basis.