Alibaba Cloud

RDMA Ops Engineer - Computing Infrastructure Networking

Alibaba Cloud, Sunnyvale, California, United States, 94087


Overview

We're seeking a skilled RDMA Ops Engineer to optimize and maintain high-performance networking infrastructure for our computing clusters. This role focuses on building and operating ultra-low-latency, high-throughput networks using RDMA technologies to power next-generation computing workloads.

Responsibilities

- Deploy, operate, and maintain RDMA-based network architectures (RoCE/InfiniBand) for clusters with thousands of nodes
- Optimize network performance for distributed collective communication workloads (NCCL, MPI, etc.)
- Troubleshoot complex network issues in distributed collective communication (e.g., NCCL/MPI communication bottlenecks)
- Use automation tools for network provisioning, monitoring, diagnostics, and performance profiling (latency/throughput analysis)
- Implement CI/CD pipelines for network infrastructure-as-code
- Manage the end-to-end network lifecycle: deployment, configuration, monitoring, and upgrades
- Collaborate with computing algorithm engineers to troubleshoot network-related bottlenecks in training/inference pipelines
- Bridge computing framework requirements with underlying network infrastructure capabilities
- Ensure compliance with security and scalability requirements

Qualifications

- Strong scripting skills (Python/Go/Bash) for operational automation
- Expert-level RDMA operational experience (RoCEv2/InfiniBand)
- Understanding of Linux internals (kernel bypass, syscall optimization, etc.) and proficiency in Linux network stack tuning (irqbalance, NUMA, hugepages)
- Hands-on experience with RDMA/DPDK performance tuning
- Strong knowledge of network protocols (TCP/IP, RoCEv2) and NIC architecture principles
- Ability to abstract complex technical concepts into architectural diagrams
- Proven track record of translating R&D innovations into production solutions
- Strong communication skills for cross-functional collaboration with computing researchers and SRE teams
- Experience managing production computing networks
- Familiarity with Kubernetes networking (CNI, Multus, SR-IOV) and GPU-aware scheduling
- Background in computing system optimization (NVIDIA collective libraries, MPI tuning)
- Deep understanding of computing workload patterns and their network implications

Compensation and Employment

The pay range for this position at commencement of employment is expected to be between $104,400 and $171,000/year. However, the base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If hired, the employee will be in an “at-will” position, and the Company reserves the right to modify base salary (as well as any other discretionary payment or compensation program) at any time, including for reasons related to individual performance, Company or individual department/team performance, and market factors.
