Cisco Systems, Inc.
Principal Engineer - HPC, AI Infrastructure
Cisco Systems, Inc., San Jose, California, United States, 95199
Location: San Jose, California, US
Area of Interest: AI or Artificial Intelligence, Internet & Mass Scale Infrastructure
Compensation Range: 291,500 USD - 369,100 USD
Job Type: Professional
Job Id: 1445895
This position requires a hybrid working schedule in the San Jose or Milpitas office.
Meet the Team
We are an innovation team on a mission to transform how enterprises harness AI. Operating with the agility of a startup and the focus of an incubator, we’re building a tight-knit group of AI and infrastructure experts driven by bold ideas and a shared goal: to rethink systems from the ground up and deliver breakthrough solutions that redefine what's possible — faster, leaner, and smarter.
We thrive in a fast-paced, experimentation-rich environment where new technologies aren’t just welcome — they’re expected. Here, you'll work side-by-side with seasoned engineers, architects, and thinkers to craft the kind of iconic products that can reshape industries and unlock entirely new models of operation for the enterprise.
If you're energized by the challenge of solving hard problems, love working at the edge of what's possible, and want to help shape the future of AI infrastructure — we'd love to meet you.
Impact
As a high-performance AI compute engineer, you will be instrumental in defining and delivering the next generation of enterprise-grade AI infrastructure. As a principal engineer within our GPU and CUDA Runtime team, you will play a critical role in shaping the future of high-performance compute infrastructure. Your contributions will directly influence the performance, reliability, and scalability of large-scale GPU-accelerated workloads, powering mission-critical applications across AI/ML, scientific computing, and real-time simulation.
You will be responsible for developing low-level components that bridge user space and kernel space, optimizing memory and data transfer paths, and enabling cutting-edge interconnect technologies like NVLink and RDMA. Your work will ensure that systems efficiently utilize GPU hardware to its full potential, minimizing latency, maximizing throughput, and improving developer experience at scale.
This role offers the opportunity to impact both open and proprietary systems, working at the intersection of device driver innovation, runtime system design, and platform integration.
Key Responsibilities
Design, develop, and maintain device drivers and runtime components for the GPU and network subsystems of our platforms.
Work with kernel and platform components to build efficient memory-management paths using pinned memory, peer-to-peer transfers, and unified memory.
Optimize data movement using high-speed interconnects such as RDMA, InfiniBand, NVLink, and PCIe, with a focus on reducing latency and increasing bandwidth.
Implement and fine-tune GPU memory copy paths with awareness of NUMA topologies and hardware coherency.
Develop instrumentation and telemetry collection mechanisms to monitor GPU and memory performance without impacting runtime workloads.
Contribute to internal tools and libraries for GPU system introspection, profiling, and debugging.
Provide technical mentorship and peer reviews, and guide junior engineers on best practices for low-level GPU development.
Stay current with evolving GPU architectures, memory technologies, and industry standards.
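The pinned-memory and asynchronous-transfer responsibilities above can be illustrated with a minimal CUDA runtime sketch. This is an illustrative example only, not code from the role; buffer sizes and names are invented for the sketch.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Minimal sketch: a pinned (page-locked) host buffer lets the GPU's DMA
// engine perform a truly asynchronous host-to-device copy on a stream,
// so transfers can overlap with kernels enqueued on other streams.
int main() {
    const size_t bytes = 1 << 20;  // 1 MiB, illustrative size

    float *h_buf = nullptr, *d_buf = nullptr;
    cudaHostAlloc(&h_buf, bytes, cudaHostAllocDefault);  // pinned host buffer
    cudaMalloc(&d_buf, bytes);                           // device buffer

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Asynchronous copy: returns immediately because the source is pinned;
    // a pageable source would fall back to a staged, effectively synchronous copy.
    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);

    // ... kernels could be launched on `stream` here to overlap with the copy ...

    cudaStreamSynchronize(stream);  // wait for the transfer to complete
    printf("transfer status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    cudaStreamDestroy(stream);
    return 0;
}
```

Peer-to-peer device copies follow the same pattern with `cudaMemcpyPeerAsync` after enabling access via `cudaDeviceEnablePeerAccess`.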
Minimum Qualifications
10+ years of experience in systems programming, ideally with 5+ years focused on CUDA/GPU driver and runtime internals.
5+ years of experience with kernel-space development, ideally in Linux kernel modules, device drivers, or GPU runtime libraries (e.g., CUDA, ROCm, or OpenCL runtimes).
Experience working with NVIDIA GPU architecture, CUDA toolchains, and performance tools (Nsight, CUPTI, etc.).
Experience optimizing for NVLink, PCIe, Unified Memory (UM), and NUMA architectures.
Strong grasp of RDMA, InfiniBand, and GPUDirect technologies and their use in frameworks like UCX.
8+ years of experience programming in C/C++ with low-level systems proficiency (memory management, synchronization, cache coherence).
Bachelor's degree in a STEM-related field.
Preferred Qualifications
Deep understanding of HPC workloads, performance bottlenecks, and compute/memory tradeoffs.
Expertise in zero-copy memory access, pinned memory, peer-to-peer memory copy, and device memory lifetimes.
Strong understanding of multi-threaded and asynchronous programming models.
Familiarity with Python and AI frameworks like PyTorch.
Familiarity with assembly or PTX/SASS for debugging or optimizing CUDA kernels.
Familiarity with NVMe storage offloads, IOAT/DPDK, or other DMA-based acceleration methods.
Familiarity with Valgrind, cuda-memcheck, gdb, and profiling with Nsight Compute/Systems.
Proficiency with perf, ftrace, eBPF, and other Linux tracing tools.
PhD is a plus, especially with research in GPU systems, compilers, or HPC.
Message to applicants applying to work in the U.S. and/or Canada:
When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process.
U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings.
Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco’s flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco’s Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next, such that the maximum number of sick time hours an employee may have available is 160 hours. Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community.
Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. For quota-based incentive pay, Cisco typically pays as follows:
0.75% of incentive target for each 1% of revenue attainment up to 50% of quota;
1.5% of incentive target for each 1% of attainment between 50% and 75%;
1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation.
For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.