Microsoft

Principal AI Network Architect

Microsoft, San Diego, California, United States, 92189

Overview

Microsoft Silicon, Cloud Hardware and Infrastructure Engineering (SCHIE) is the team behind Microsoft's expanding cloud infrastructure, responsible for powering Microsoft's Intelligent Cloud mission. SCHIE delivers core infrastructure and foundational technologies for Microsoft's online businesses, including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform globally, through our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide, and we are looking for passionate engineers to help achieve that mission.

As Microsoft's cloud business grows, the ability to deploy new offerings and hardware infrastructure on time, in high volume, with high quality, and at the lowest cost is paramount. The Cloud Hardware Systems Engineering (CHSE) team defines and delivers operational measures of success for hardware manufacturing, improving planning, quality, delivery, scale, and sustainability for Microsoft cloud hardware. We are seeking seasoned engineers with a passion for customer‑focused solutions and the industry knowledge to envision and implement future technical solutions that will manage and optimize the cloud infrastructure. We are looking for a Principal AI Network Architect to join the team.

Responsibilities

Technology Leadership:

Spearhead architectural definition and innovation for next‑generation GPU and AI accelerator platforms, focusing on ultra‑high-bandwidth, low‑latency backend networks. Drive system‑level integration across compute, storage, and interconnect domains to support scalable AI training workloads.

Cross‑Functional Collaboration:

Partner with silicon, firmware, and datacenter engineering teams to co‑design infrastructure that meets performance, reliability, and deployment goals. Influence platform decisions across rack-, chassis-, and pod‑level implementations.

Technology Partnerships:

Cultivate deep technical relationships with silicon vendors, optics suppliers, and switch fabric providers to co‑develop differentiated solutions. Represent Microsoft in joint architecture forums and technical workshops.

Architectural Clarity:

Evaluate and articulate tradeoffs across the electrical, mechanical, thermal, and signal integrity domains. Frame decisions in terms of TCO, performance, scalability, and deployment risk. Lead design reviews and contribute to PRDs and system specifications.

Industry Influence:

Shape the direction of hyperscale AI infrastructure by engaging with standards bodies (e.g., IEEE 802.3), influencing component roadmaps, and driving adoption of novel interconnect protocols and topologies.

Qualifications

Required Qualifications:

Bachelor's Degree in Electrical Engineering, Computer Engineering, Mechanical Engineering, or a related field AND 8+ years of technical engineering experience, OR Master's Degree in Electrical Engineering, Computer Engineering, Mechanical Engineering, or a related field AND 7+ years of technical engineering experience, OR equivalent experience.

5+ years of experience designing AI backend networks and integrating them into large‑scale GPU systems.

Other Requirements:

Ability to meet Microsoft, customer, and/or government security screening requirements. These include, but are not limited to, the Microsoft Cloud Background Check, required on hire/transfer and every two years thereafter.

Preferred Qualifications:

- Proven expertise in system architecture across compute, networking, and accelerator domains.
- Deep understanding of RDMA protocols (RoCE, InfiniBand), congestion control (e.g., DCQCN), and Layer 2/3 routing.
- Experience with optical interconnects (e.g., PSM, WDM), link budget analysis, and transceiver integration.
- Familiarity with signal integrity modeling, link training, and physical layer optimization.
- Experience architecting backend networks for AI training and inference workloads, including Hamiltonian cycle traffic and collective operations (e.g., all‑reduce, all‑gather).
- Hands‑on design of high‑radix switches (≥400 Gbps per port), orthogonal chassis, and cabled backplanes.
- Knowledge of chip‑to‑chip and chip‑to‑module interfaces, including error correction and equalization techniques.
- Experience with custom NIC IP and transport layers for secure, reliable packet delivery.
- Familiarity with AI model execution pipelines and their impact on pod‑level network design and latency SLAs.
- Prior contributions to hyperscale deployments or cloud‑scale AI infrastructure programs.

Compensation and Benefits

Hardware Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. A different range applies in the San Francisco Bay Area and the New York City metropolitan area: USD $188,000 - $304,200 per year. Certain roles may be eligible for benefits and other compensation. Additional benefits and pay information can be found here: careers.microsoft.com/us/en/us-corporate-pay

Microsoft will accept applications for the role until September 20, 2025.

Notes

#SCHIE #azurehwjobs #CHSE

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
