Oracle

Principal Software Engineer, Networking - AI Infrastructure Innovation

Oracle, Seattle, Washington, US, 98127


Overview

The Oracle Cloud Infrastructure (OCI) AI Infrastructure Innovation team is pioneering next-generation AI/HPC networking for GPU superclusters at massive scale. Our mission is to design and deliver state-of-the-art RDMA-based networking—spanning frontend and backend fabrics—that enables customers to achieve high performance for AI training and inference. You will define architecture, lead complex system design, and implement innovative networking software that advances RDMA for GPUs and accelerates storage access. If you thrive at the intersection of large-scale distributed systems, high-speed networking, and AI workloads, this role offers the opportunity to push the boundaries of what’s possible.

Responsibilities

Lead architecture, system design, and implementation for high-performance RDMA solutions across OCI’s AI/HPC platforms, including frontend and backend fabrics.

Innovate on network and TCP performance, identifying changes required across the stack (kernel, NIC, switch, transport, protocol, storage, GPU communications).

Develop production-grade, high-performance software features with rigorous reliability, observability, and security.

Define performance goals and success metrics; design benchmarks and conduct large-scale experiments to validate throughput, latency, and tail behavior.

Collaborate with GPU platform, storage, database, and control-plane teams to deliver end-to-end solutions and influence OCI-wide network architecture and standards.

Mentor engineers, provide technical leadership/reviews, and contribute to long-term roadmap and technical strategy.

Qualifications

Required:

Strong software engineering background with a deep understanding of data structures and algorithms, and a demonstrated ability to optimize for high scale, low latency, and high throughput in large-scale systems.

Experience developing, shipping, and operating high-performance production code.

Demonstrated ability to lead technically, mentor others, and deliver results in ambiguous, complex problem spaces.

BS/MS in Computer Science, Electrical/Computer Engineering, or equivalent practical experience.

Preferred:

Experience with RDMA networking (RoCE and/or InfiniBand), including congestion control, reliability, and performance tuning at scale.

Familiarity with AI/HPC stacks and workloads: NCCL/RCCL/MPI, Slurm or other schedulers, GPU communication patterns, collective operations, and large-scale training jobs.

Experience integrating GPUDirect and NVMe-oF storage access in production.

Hands-on with observability and performance tooling (e.g., eBPF, perf, flame graphs, switch/NIC telemetry) and SLO-driven operations at scale.

Compensation and Benefits

US: Hiring Range in USD from: $96,800 to $223,400 per annum. May be eligible for bonus and equity.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business.

Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes medical, dental, and vision insurance; disability and life insurance; 401(k) with company match; paid time off and holidays; and other voluntary benefits.

The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.

Career Level - IC4
