Jobs via Dice
Overview
Principal AI Network Architect
Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team behind Microsoft's expanding cloud infrastructure and is responsible for powering Microsoft's "Intelligent Cloud" mission. SCHIE delivers the core infrastructure and foundational technologies for more than 200 Microsoft online businesses, including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform, globally, through our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide, and we are looking for passionate engineers to help achieve that mission. We are looking for a Principal AI Network Architect to join the team.
Responsibilities
Technology Leadership: Spearhead architectural definition and innovation for next-generation GPU and AI accelerator platforms, with a focus on ultra-high bandwidth, low-latency backend networks. Drive system-level integration across compute, storage, and interconnect domains to support scalable AI training workloads.
Cross-Functional Collaboration: Partner with silicon, firmware, and datacenter engineering teams to co-design infrastructure that meets performance, reliability, and deployment goals. Influence platform decisions across rack-, chassis-, and pod-level implementations.
Technology Partnerships: Cultivate deep technical relationships with silicon vendors, optics suppliers, and switch fabric providers to co-develop differentiated solutions. Represent Microsoft in joint architecture forums and technical workshops.
Architectural Clarity: Evaluate and articulate tradeoffs across electrical, mechanical, thermal, and signal integrity domains. Frame decisions in terms of TCO, performance, scalability, and deployment risk. Lead design reviews and contribute to PRDs and system specifications.
Industry Influence: Shape the direction of hyperscale AI infrastructure by engaging with standards bodies (e.g., IEEE 802.3), influencing component roadmaps, and driving adoption of novel interconnect protocols and topologies.
Qualifications
Required Qualifications
Bachelor's Degree in Electrical Engineering, Computer Engineering, Mechanical Engineering, or a related field AND 8+ years of technical engineering experience, OR Master's Degree in Electrical Engineering, Computer Engineering, Mechanical Engineering, or a related field AND 7+ years of technical engineering experience, OR equivalent experience.
5+ years of experience designing AI backend networks and integrating them into large-scale GPU systems.
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Preferred Qualifications
Proven expertise in system architecture across compute, networking, and accelerator domains.
Deep understanding of RDMA protocols (RoCE, InfiniBand), congestion control (DCQCN), and Layer 2/3 routing.
Experience with optical interconnects (e.g., PSM, WDM), link budget analysis, and transceiver integration.
Familiarity with signal integrity modeling, link training, and physical layer optimization.
Experience architecting backend networks for AI training and inference workloads, including Hamiltonian cycle traffic and collective operations (e.g., all-reduce, all-gather).
Hands-on design of high-radix switches (400 Gbps per port), orthogonal chassis, and cabled backplanes.
Knowledge of chip-to-chip and chip-to-module interfaces, including error correction and equalization techniques.
Experience with custom NIC IPs and transport layers for secure, reliable packet delivery.
Familiarity with AI model execution pipelines and their impact on pod-level network design and latency SLAs.
Prior contributions to hyperscale deployments or cloud-scale AI infrastructure programs.
Compensation and Location
Hardware Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. A different range applies in specific work locations within the San Francisco Bay Area and the New York City metropolitan area, where the base pay range for this role is USD $188,000 - $304,200 per year. Microsoft will accept applications for the role until September 7, 2025.
Equal Opportunity
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request via the Accommodation request form. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
#SCHIE #azurehwjobs #CHSE