ByteDance
Research Engineer / Scientist - Storage for LLM
Location: Seattle
Team: Infrastructure
Employment Type: Regular
Job Code: A193071
About the Team
The Infrastructure System Lab is a hybrid research and engineering group focused on building next‑generation AI‑native data infrastructure. Positioned at the intersection of databases, large‑scale systems, and AI, the team leads innovation in areas such as vector and multi‑modal databases, infrastructure optimization through machine learning, and LLM‑based tooling like NL2SQL and NL2Chart. The lab also develops high‑performance cache systems, including multi‑engine key‑value stores and LLM inference KV caches. The team thrives on collaboration, with researchers and engineers working closely to take ideas from paper to prototype to production. Its work supports key products used by millions and is regularly published and deployed at scale.
About the Role
We are seeking a systems researcher or engineer with deep expertise in large‑scale distributed storage and caching infrastructure to design and maintain a high‑performance KV cache layer for large language model (LLM) inference. This role focuses on improving latency, throughput, and cost‑efficiency in transformer‑based model serving by optimizing the reuse of attention key‑value states and prompt embeddings. You’ll work on cutting‑edge AI systems problems with real‑world impact, alongside a world‑class team. The role offers opportunities to publish, contribute to open source, and attend top conferences, along with competitive compensation, generous research resources, and an innovation‑driven culture.
Responsibilities
Design and implement a distributed KV cache system to store and retrieve intermediate states (e.g., attention keys/values) for transformer‑based LLMs across GPUs or nodes.
Optimize low‑latency access and eviction policies for caching long‑context LLM inputs, token streams, and reused embeddings.
Collaborate with inference and serving teams to integrate the cache with token streaming pipelines, batched decoding, and model parallelism.
Develop cache consistency and synchronization protocols for multi‑tenant, multi‑request environments.
Implement memory‑aware sharding, eviction (e.g., windowed LRU, TTL), and replication strategies across GPUs or distributed memory backends; a brief illustrative sketch of these eviction ideas follows this list.
Monitor system performance and iterate on caching algorithms to reduce compute costs and response time for inference workloads.
Evaluate and, where needed, extend open‑source KV stores or build custom GPU‑aware caching layers (e.g., CUDA, Triton, shared memory, RDMA).
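To make the eviction bullet above concrete, here is a minimal, single‑process Python sketch of a bounded LRU window combined with per‑entry TTL expiry. It is an illustration of the general technique only; the class name, capacity, and keying by prefix hash are assumptions, not a description of the production system.

```python
# Illustrative sketch only (assumed names and sizes); not the team's implementation.
import time
from collections import OrderedDict

class WindowedLRUCache:
    """Bounded LRU window with per-entry TTL expiry, keyed e.g. by a prompt-prefix hash."""

    def __init__(self, max_entries: int, ttl_seconds: float):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()  # key -> (value, insert_timestamp)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
        self._store.move_to_end(key)              # newest entries sit at the end
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)       # evict the least recently used entry

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, inserted_at = item
        if time.monotonic() - inserted_at > self.ttl:
            del self._store[key]                  # TTL expiry checked on read
            return None
        self._store.move_to_end(key)              # a hit refreshes recency (not the TTL)
        return value

# Example usage: cache a block of attention K/V state under a prefix hash.
cache = WindowedLRUCache(max_entries=1024, ttl_seconds=300.0)
cache.put("prefix-hash-abc", {"keys": "...", "values": "..."})
print(cache.get("prefix-hash-abc") is not None)   # True until evicted or expired
```

In a real multi‑GPU deployment the same policies would be applied per shard, with values living in device or pooled host memory rather than Python objects.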
Qualifications
Minimum Qualifications
PhD in Computer Science, Applied Mathematics, Electrical Engineering, or a related technical field.
Strong understanding of transformer‑based model internals and how KV caching affects autoregressive decoding (a minimal sketch of this mechanism follows the qualifications list).
Experience with distributed systems, memory management, and low‑latency serving (RPC, gRPC, CUDA‑aware networking).
Familiarity with high‑performance compute environments (NVIDIA GPUs, TensorRT, Triton Inference Server).
Proficiency in languages like C++, Rust, Go, or CUDA for systems‑level development.
Preferred Qualifications
Prior experience building inference‑serving systems for LLMs (e.g., vLLM, SGLang, FasterTransformer, DeepSpeed, Hugging Face Text Generation Inference).
Experience with memory hierarchy optimization (HBM, NUMA, NVLink) and GPU‑to‑GPU communication (NCCL, GPUDirect RDMA, GPUDirect Storage, InfiniBand).
Exposure to cache‑aware scheduling, batching, and prefetching strategies in model serving.
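For readers less familiar with the KV‑caching mechanism referenced in the minimum qualifications, the following toy NumPy sketch (random weights, toy dimensions, a single attention head; all names are assumptions) shows why caching helps: each decode step projects only the newest token and attends over the cached keys/values instead of re‑encoding the whole prefix.

```python
# Toy, single-head sketch of KV-cached autoregressive decoding (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d_model = 16
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention of one query over all cached positions.
    scores = K @ q / np.sqrt(d_model)             # shape (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                            # shape (d_model,)

def decode_step(x_t, cache):
    # Only the newest token's projections are computed; earlier K/V come from the cache.
    q = W_q @ x_t
    cache["K"].append(W_k @ x_t)
    cache["V"].append(W_v @ x_t)
    return attend(q, np.stack(cache["K"]), np.stack(cache["V"]))

cache = {"K": [], "V": []}
for _ in range(5):                                # stand-in for a 5-token decode loop
    x_t = rng.standard_normal(d_model)            # stand-in for the newest hidden state
    out = decode_step(x_t, cache)
print(len(cache["K"]), out.shape)                 # 5 cached positions, (16,) output
```

Without the cache, every step would recompute keys and values for the entire prefix, so per‑step cost grows with sequence length; the distributed cache layer described in the responsibilities extends this idea across requests, GPUs, and nodes.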
Job Information
The base salary range for this position in Seattle is $129,960 - $246,240 annually. Compensation may vary outside of this range depending on a candidate’s qualifications, skills, competencies, experience, and location. Base pay is one part of the Total Package provided to compensate and recognize employees for their work; this role may also be eligible for discretionary bonuses/incentives and restricted stock units.
Benefits
Day‑one access to medical, dental, and vision insurance.
401(k) savings plan with company match.
Paid parental leave.
Short‑term and long‑term disability coverage.
Life insurance and wellbeing benefits.
10 paid holidays per year, 10 paid sick days per year, and 17 days of Paid Personal Time (prorated upon hire with increasing accruals by tenure).
Employment Eligibility (Los Angeles County)
Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:
Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;
Appropriately handling and managing confidential information including proprietary and trade secret information and access to information technology systems;
Exercising sound judgment.