Oracle

Principal Software Engineer, Networking - AI Infrastructure Innovation

Oracle, Redwood City, California, United States, 94061


The OCI (Oracle Cloud Infrastructure) AI Infrastructure Innovation team is pioneering next‑generation AI/HPC networking for GPU superclusters at massive scale. Our mission is to design and deliver state‑of‑the‑art RDMA‑based networking—spanning frontend and backend fabrics—that enables customers to achieve high performance for AI training and inference. You will define architecture, lead complex system design, and implement innovative networking software that advances RDMA for GPUs and accelerates storage access. If you thrive at the intersection of large‑scale distributed systems, high‑speed networking, and AI workloads, this role offers the opportunity to push the boundaries of what’s possible.

Responsibilities

Lead architecture, system design, and implementation for high‑performance RDMA solutions across OCI’s AI/HPC platforms, including frontend and backend fabrics.

Innovate on network and TCP performance, and identify the changes required across the stack: kernel, NIC, switch, transport, protocol, storage, and GPU communications.

Develop production‑grade, high‑performance software features with rigorous reliability, observability, and security.

Define performance goals and success metrics; design benchmarks and conduct large‑scale experiments to validate throughput, latency, and tail behavior.

Collaborate with GPU platform, storage, database, and control‑plane teams to deliver end‑to‑end solutions and influence OCI‑wide network architecture and standards.

Mentor engineers, provide technical leadership/reviews, and contribute to long‑term roadmap and technical strategy.

Required Qualifications

Strong software engineering background with a deep understanding of data structures and algorithms, and a demonstrated ability to optimize for high scale, low latency, and high throughput in large‑scale systems.

Experience developing, shipping, and operating high‑performance production code.

Demonstrated ability to lead technically, mentor others, and deliver results in ambiguous, complex problem spaces.

BS/MS in Computer Science, Electrical/Computer Engineering, or equivalent practical experience.

Preferred Qualifications

Experience with RDMA networking (RoCE and/or InfiniBand), including congestion control, reliability, and performance tuning at scale.

Familiarity with AI/HPC stacks and workloads: NCCL/RCCL/MPI, Slurm or other schedulers, GPU communication patterns, collective operations, and large‑scale training jobs.

Experience integrating GPUDirect and NVMe‑oF storage access in production.

Hands‑on with observability and performance tooling (e.g., eBPF, perf, flame graphs, switch/NIC telemetry) and SLO‑driven operations at scale.

EEO Statement

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Salary and Benefits

US hiring range: $96,800 – $223,400 per year. May be eligible for bonus and equity.

Medical, dental, and vision insurance

Short‑term and long‑term disability

Life insurance and AD&D

401(k) Savings and Investment Plan with company match

Paid time off: Flexible vacation, paid sick leave, paid parental leave, adoption assistance

Paid holidays (11)

Employee Stock Purchase Plan

Voluntary benefits including auto, homeowner and pet insurance
