ByteDance

Software Engineer Graduate (Inference Infrastructure) - 2026 Start (PhD)

ByteDance, Seattle, Washington, US 98127


Join us as we work together to inspire creativity and enrich life around the globe.

Location: Seattle
Team: Technology
Employment Type: Regular

About the Team

The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance's Core Compute Infrastructure organization, responsible for designing and operating the platforms that power microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use, enabling both internal and external developers to bring AI workloads from research to production at scale.

Responsibilities

- Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
- Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
- Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines.
- Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
- Write high-quality, production-ready code that is maintainable, testable, and scalable.

Qualifications

Minimum Qualifications:
- B.S./M.S. in Computer Science, Computer Engineering, or related fields with 2+ years of relevant experience (Ph.D. with strong systems/ML publications also considered).
- Strong understanding of large model inference, distributed and parallel systems, and/or high-performance networking systems.
- Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
- Solid knowledge of container and orchestration technologies (Docker, Kubernetes).
- Proficiency in at least one major programming language (Go, Rust, Python, or C++).

Preferred Qualifications:
- Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
- Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
- Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
- Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
- Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms.
- Excellent communication skills and ability to collaborate across global, cross-functional teams.
- Passion for system efficiency, performance optimization, and open-source innovation.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe, and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

ByteDance is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs, or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://tinyurl.com/RA-request
