Pangleglobal
Software Engineer Graduate (Inference Infrastructure) - 2026 Start (PhD)
Pangleglobal, San Jose, California, United States, 95199
Location: San Jose
Team: Technology
Employment Type: Regular

The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance’s Core Compute Infrastructure organization, responsible for designing and operating the platforms that power microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use, enabling both internal and external developers to bring AI workloads from research to production at scale. We are expanding our focus on LLM inference infrastructure to support new AI workloads, and are looking for engineers passionate about cloud-native systems, scheduling, and GPU acceleration.

Responsibilities
- Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
- Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
- Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines.
- Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
- Write high-quality, production-ready code that is maintainable, testable, and scalable.

Qualifications
Minimum Qualifications:
- B.S./M.S. in Computer Science, Computer Engineering, or related fields with 2+ years of relevant experience (Ph.D. with strong systems/ML publications also considered).
- Strong understanding of large model inference, distributed and parallel systems, and/or high-performance networking systems.
- Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
- Solid knowledge of container and orchestration technologies (Docker, Kubernetes).
- Proficiency in at least one major programming language (Go, Rust, Python, or C++).

Preferred Qualifications:
- Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
- Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
- Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
- Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
- Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms.
- Excellent communication skills and ability to collaborate across global, cross-functional teams.
- Passion for system efficiency, performance optimization, and open-source innovation.

ByteDance is an equal opportunities employer and welcomes applications from all qualified candidates. We are committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives.

We offer a competitive salary range of $136,800 - $259,200 annually, as well as a comprehensive benefits package, including medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, wellbeing benefits, and 10 paid holidays per year.