ByteDance

Senior Research Engineer / Scientist - Storage for LLM Technology - Infrastructure

ByteDance, Seattle, Washington, US, 98127


Join us as we work together to inspire creativity and enrich life around the globe.

Location: Seattle
Team: Infrastructure
Employment Type: Regular
Job Code: A152690

Responsibilities

- Design and implement a distributed KV cache system to store and retrieve intermediate states (e.g., attention keys/values) for transformer-based LLMs across GPUs or nodes.
- Optimize low-latency access and eviction policies for caching long-context LLM inputs, token streams, and reused embeddings.
- Collaborate with inference and serving teams to integrate the cache with token streaming pipelines, batched decoding, and model parallelism.
- Develop cache consistency and synchronization protocols for multi-tenant, multi-request environments.
- Implement memory-aware sharding, eviction (e.g., windowed LRU, TTL), and replication strategies across GPUs or distributed memory backends (see the sketch after this list).
- Monitor system performance and iterate on caching algorithms to reduce compute costs and response time for inference workloads.
- Evaluate and, where needed, extend open-source KV stores or build custom GPU-aware caching layers (e.g., CUDA, Triton, shared memory, RDMA).
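
The eviction bullet above names windowed LRU and TTL; below is a minimal, single-node Python sketch of that policy, kept deliberately simple. The class name `KVBlockCache`, the byte-based capacity window, and lazy TTL expiry on lookup are assumptions made for illustration; a system at the scale described here would shard blocks across GPUs or nodes and store tensors rather than opaque byte strings.

```python
import time
from collections import OrderedDict


class KVBlockCache:
    """Sketch of a KV-block cache with LRU eviction over a bounded byte
    window and lazy TTL expiry on lookup (illustrative only)."""

    def __init__(self, capacity_bytes, ttl_seconds):
        self.capacity_bytes = capacity_bytes
        self.ttl_seconds = ttl_seconds
        self.used_bytes = 0
        self._entries = OrderedDict()   # key -> (blob, inserted_at); order = LRU -> MRU

    def put(self, key, blob):
        if key in self._entries:
            self._evict(key)            # replace any stale copy of this block
        self._entries[key] = (blob, time.monotonic())
        self.used_bytes += len(blob)
        while self.used_bytes > self.capacity_bytes and self._entries:
            oldest = next(iter(self._entries))
            self._evict(oldest)         # windowed LRU: drop the coldest block

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        blob, inserted_at = entry
        if time.monotonic() - inserted_at > self.ttl_seconds:
            self._evict(key)            # lazy TTL expiry on access
            return None
        self._entries.move_to_end(key)  # mark as most recently used
        return blob

    def _evict(self, key):
        blob, _ = self._entries.pop(key)
        self.used_bytes -= len(blob)
```

Lookups refresh recency, capacity pressure evicts from the cold end of the window, and expired blocks are dropped the next time they are requested.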

Minimum Qualifications

- PhD in Computer Science, Applied Mathematics, Electrical Engineering, or a related technical field.
- Strong understanding of transformer-based model internals and how KV caching affects autoregressive decoding (see the sketch after this list).
- Experience with distributed systems, memory management, and low-latency serving (RPC, gRPC, CUDA-aware networking).
- Familiarity with high-performance compute environments (NVIDIA GPUs, TensorRT, Triton Inference Server).
- Proficiency in languages like C++, Rust, Go, or CUDA for systems-level development.
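
The second qualification refers to how KV caching affects autoregressive decoding: without a cache, every step re-encodes the full prefix, while with a cache each step appends one key/value pair and attends over the stored entries. The sketch below uses scalar stand-ins for the real multi-dimensional tensors; `project` is a hypothetical per-token projection supplied by the caller, and the prefill is assumed to be non-empty.

```python
import math


def attend(query, keys, values):
    """Toy single-head attention over scalar keys/values (illustrative only)."""
    scores = [query * k for k in keys]
    peak = max(scores)
    weights = [math.exp(s - peak) for s in scores]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total


def decode_with_kv_cache(prompt_states, steps, project):
    """Autoregressive decoding with a KV cache: each step appends one
    (key, value) pair and attends over the cached pairs, instead of
    re-encoding the whole prefix from scratch."""
    key_cache = [k for k, _ in prompt_states]   # cache primed during prefill
    value_cache = [v for _, v in prompt_states]
    outputs = []
    hidden = value_cache[-1]
    for _ in range(steps):
        query, key, value = project(hidden)     # projections for the new token
        key_cache.append(key)                   # O(1) append vs. O(n) recompute
        value_cache.append(value)
        hidden = attend(query, key_cache, value_cache)
        outputs.append(hidden)
    return outputs
```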

Preferred Qualifications

- Prior experience building inference-serving systems for LLMs (e.g., vLLM, SGLang, FasterTransformer, DeepSpeed, Hugging Face Text Generation Inference).
- Experience with memory hierarchy optimization (HBM, NUMA, NVLink) and GPU-to-GPU communication (NCCL, GDR, GDS, InfiniBand).
- Exposure to cache-aware scheduling, batching, and prefetching strategies in model serving (see the sketch after this list).
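
The last bullet mentions cache-aware scheduling, batching, and prefetching; one common pattern is to fetch the next batch's cached prefix from a slower tier while the current batch executes, so execution is not stalled on cache transfers. The sketch below models that overlap with a background thread; `fetch_prefix` and `run_batch` are hypothetical callables standing in for cache-tier and execution calls.

```python
from concurrent.futures import ThreadPoolExecutor


def serve_batches(batches, fetch_prefix, run_batch):
    """Cache-aware prefetching sketch: while batch i runs, the cached prefix
    for batch i+1 is fetched in the background (illustrative only)."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as prefetcher:
        pending = prefetcher.submit(fetch_prefix, batches[0]) if batches else None
        for i, batch in enumerate(batches):
            prefix = pending.result()           # blocks only if prefetch is unfinished
            if i + 1 < len(batches):
                pending = prefetcher.submit(fetch_prefix, batches[i + 1])
            results.append(run_batch(batch, prefix))
    return results
```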

About Us

Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut, and Pico, as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.

Why Join ByteDance

Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover, and connect, and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity, and enrich life, a mission we work towards every day. As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe, and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

Reasonable Accommodation

ByteDance is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs, or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request.
