Oracle

Sr Principal Software Engineer, Networking - AI Infrastructure Innovation

Oracle, Seattle, Washington, United States, 98127


The OCI (Oracle Cloud Infrastructure) AI Infrastructure Innovation team is pioneering next-generation AI/HPC networking for GPU superclusters at massive scale. Our mission is to design and deliver state-of-the-art RDMA-based networking—spanning frontend and backend fabrics—that enables customers to achieve high performance for AI training and inference. You will define architecture, lead complex system design, and implement innovative networking software that advances RDMA for GPUs and accelerates storage access. If you thrive at the intersection of large-scale distributed systems, high-speed networking, and AI workloads, this role offers the opportunity to push the boundaries of what’s possible.

Responsibilities

Lead architecture, system design, and implementation for high-performance RDMA solutions across OCI’s AI/HPC platforms, including frontend and backend fabrics.

Innovate on network and TCP performance; identify the changes required across the kernel, NIC, switch, transport, protocol, storage, and GPU communication layers.

Develop production-grade, high-performance software features with rigorous reliability, observability, and security.

Define performance goals and success metrics; design benchmarks and conduct large-scale experiments to validate throughput, latency, and tail behavior.

Collaborate with GPU platform, storage, database, and control-plane teams to deliver end-to-end solutions and influence OCI-wide network architecture and standards.

Mentor engineers, provide technical leadership/reviews, and contribute to long‑term roadmap and technical strategy.

Qualifications

Certain US customer or client-facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates.

Range and benefit information provided in this posting is specific to the stated locations only.

US: Hiring Range in USD from: $120,100 to $251,600 per annum. May be eligible for bonus, equity, and compensation deferral.

Oracle maintains broad salary ranges for its roles in order to account for variations in knowledge, skills, experience, market conditions and locations, as well as reflect Oracle’s differing products, industries and lines of business. Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.

Oracle US offers a comprehensive benefits package which includes the following:

Medical, dental, and vision insurance, including expert medical opinion

Short-term and long-term disability

Life insurance and AD&D

Supplemental life insurance (Employee/Spouse/Child)

Health care and dependent care Flexible Spending Accounts

Pre‑tax commuter and parking benefits

401(k) Savings and Investment Plan with company match

Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non‑overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.

11 paid holidays

Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.

Paid parental leave

Adoption assistance

Employee Stock Purchase Plan

Financial planning and group legal

Voluntary benefits including auto, homeowner and pet insurance

The role will generally accept applications for at least three calendar days from the posting date or as long as the job remains posted.

Career Level - IC5

Required:

Deep experience with RDMA networking (RoCE and/or InfiniBand), including congestion control, reliability, and performance tuning at scale.

At least 10 years of software engineering experience delivering high-performance features in large distributed systems.

Expertise with networking protocols and systems: TCP/IP, IPv4/IPv6, DNS, DHCP.

Knowledge of L2/L3 and data center networking: MPLS, BGP/OSPF/IS‑IS; experience with VXLAN and EVPN is a plus.

High‑speed packet processing and/or HPC networking experience.

Strong understanding of data structures and algorithms with demonstrated ability to optimize for high scale, low latency, and high throughput.

Experience with storage technologies relevant to high‑performance environments, such as NVMe/NVMe‑oF, block storage, journaling, IO path optimization, and performance troubleshooting across compute, network, and storage.

Demonstrated ability to lead technically, mentor others, and deliver results in ambiguous, complex problem spaces.

BS/MS in Computer Science, Electrical/Computer Engineering, or equivalent practical experience.

Preferred:

Familiarity with AI/HPC stacks and workloads: NCCL/RCCL/MPI, Slurm or other schedulers, GPU communication patterns, collective operations, and large‑scale training jobs.

Experience integrating GPUDirect RDMA and remote NVMe access in production.

Hands-on experience with observability and performance tooling (e.g., eBPF, perf, flame graphs, switch/NIC telemetry) and SLO-driven operations at scale.
