
Principal Engineer, Training

ZipRecruiter, Sunnyvale, California, United States, 94087


Job Description

CoreWeave is the AI Hyperscaler, delivering a cloud platform of cutting-edge services powering the next wave of AI. Our technology provides enterprises and leading AI labs with the most performant, efficient, and resilient solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe. CoreWeave was ranked as one of the TIME100 most influential companies of 2024.

As the leader in the industry, we thrive in an environment where adaptability and resilience are key. Our culture offers career-defining opportunities for those who excel amid change and challenge. If you're someone who thrives in a dynamic environment, enjoys solving complex problems, and is eager to make a significant impact, CoreWeave is the place for you. Join us, and be part of a team solving some of the most exciting challenges in the industry. CoreWeave powers the creation and delivery of the intelligence that drives innovation.

What You'll Do:

CoreWeave is seeking a Principal Engineer to be the hands-on technical leader for our next-generation Large-Scale Training Platform. As a senior individual contributor, you will architect and build the fastest, most cost-efficient, and most reliable GPU training services in the industry. You'll prototype new capabilities, drive engineering standards, and work side by side with product, orchestration, and hardware teams to turn CoreWeave's massive GPU fleet into the best place on earth to train frontier models.

About the role:

Technical Vision & Strategy

- Shape the technical roadmap for distributed training, data-pipeline efficiency, and model-parallel scaling.
- Evaluate emerging frameworks (DeepSpeed, Megatron-LM, FSDP, Alpa, etc.) and guide build-vs-buy decisions.

Platform Architecture

- Design Kubernetes control-plane components that launch and monitor multi-thousand-GPU training jobs.
- Implement optimizations such as tensor/pipeline parallelism, ZeRO optimizer stages, activation checkpointing, gradient compression, and elastic scaling (a minimal illustrative sketch follows this list).
- Build high-throughput data-ingestion paths (streaming, sharded datasets, RDMA, NVMe over Fabrics) that keep GPUs saturated.
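
For illustration only (not part of the original posting): a minimal sketch of the kind of setup these bullets gesture at, using PyTorch FSDP for ZeRO-style parameter/optimizer sharding, BF16 mixed precision, and per-layer activation checkpointing. The model, sizes, and launch assumptions are hypothetical placeholders, and the FSDP/checkpointing imports reflect recent PyTorch releases rather than CoreWeave's actual platform code.

```python
# Hypothetical sketch: ZeRO-style sharding + BF16 + activation checkpointing via PyTorch FSDP.
# Assumes a torchrun launch (RANK/WORLD_SIZE/LOCAL_RANK set) and NCCL available on each node.
import os

import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.wrap import ModuleWrapPolicy
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    apply_activation_checkpointing,
    checkpoint_wrapper,
)


def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Placeholder stack standing in for a real transformer model.
    model = nn.Sequential(
        *[nn.TransformerEncoderLayer(d_model=1024, nhead=16) for _ in range(8)]
    ).cuda()

    # BF16 for parameters, gradient reduction, and buffers; FULL_SHARD is the ZeRO-3-style strategy.
    bf16 = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )
    model = FSDP(
        model,
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        mixed_precision=bf16,
        auto_wrap_policy=ModuleWrapPolicy({nn.TransformerEncoderLayer}),
    )

    # Recompute each encoder layer's activations in the backward pass to trade FLOPs for memory.
    apply_activation_checkpointing(
        model,
        checkpoint_wrapper_fn=checkpoint_wrapper,
        check_fn=lambda m: isinstance(m, nn.TransformerEncoderLayer),
    )

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(128, 4, 1024, device="cuda")  # (seq_len, batch, hidden) dummy batch
    loss = model(x).float().mean()
    loss.backward()
    opt.step()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```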

Operational Excellence

- Create real-time observability, debugger hooks, and automated rollback/resume for long-running jobs.
- Develop cost-vs-time-to-train analytics so customers can pick the right hardware mix (a toy illustration follows this list).
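
As a toy illustration of the cost-vs-time-to-train analytics mentioned above (not CoreWeave's actual tooling): a back-of-the-envelope comparison of hypothetical hardware mixes. All GPU counts, throughput numbers, and hourly rates are made up, and linear scaling is assumed.

```python
# Toy cost-vs-time-to-train comparison; every number below is a made-up placeholder.
from dataclasses import dataclass


@dataclass
class HardwareOption:
    name: str
    gpus: int                     # total GPUs dedicated to the job
    tokens_per_gpu_per_s: float   # sustained training throughput per GPU
    usd_per_gpu_hour: float       # price per GPU-hour


def time_and_cost(opt: HardwareOption, total_tokens: float):
    """Return (hours to train, total USD), assuming throughput scales linearly with GPU count."""
    hours = total_tokens / (opt.gpus * opt.tokens_per_gpu_per_s) / 3600
    cost = hours * opt.gpus * opt.usd_per_gpu_hour
    return hours, cost


if __name__ == "__main__":
    total_tokens = 1e12  # hypothetical 1T-token pre-training run
    options = [
        HardwareOption("cluster-A", gpus=1024, tokens_per_gpu_per_s=3000.0, usd_per_gpu_hour=2.50),
        HardwareOption("cluster-B", gpus=4096, tokens_per_gpu_per_s=2700.0, usd_per_gpu_hour=3.20),
    ]
    for opt in options:
        hours, cost = time_and_cost(opt, total_tokens)
        print(f"{opt.name}: ~{hours:,.0f} h wall-clock, ~${cost:,.0f} total")
```

In this toy example the larger fleet finishes far sooner but costs more overall; analytics like this let a customer pick the point on that trade-off curve they care about.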

Hands-on Development

- Write production code, reference implementations, and performance benchmarks (a minimal benchmark sketch follows this list).
- Lead design reviews and dive deep into bottlenecks across NCCL, InfiniBand, NVLink, SHARP, and PCIe topologies.
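
A sketch of the kind of performance benchmark referenced above: a minimal NCCL all-reduce bandwidth probe built on torch.distributed. The launch command, message sizes, and bus-bandwidth formula are illustrative assumptions, not a prescribed methodology.

```python
# Minimal NCCL all-reduce bandwidth probe.
# Illustrative launch: torchrun --nproc_per_node=8 allreduce_bench.py
import os
import time

import torch
import torch.distributed as dist


def bench(numel: int, iters: int = 20) -> float:
    """Return approximate bus bandwidth (GB/s) for a BF16 all-reduce of `numel` elements."""
    x = torch.ones(numel, dtype=torch.bfloat16, device="cuda")
    for _ in range(5):            # warm-up iterations
        dist.all_reduce(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters
    world = dist.get_world_size()
    # A ring all-reduce moves roughly 2 * (N - 1) / N of the buffer per rank.
    bus_bytes = x.element_size() * numel * 2 * (world - 1) / world
    return bus_bytes / elapsed / 1e9


if __name__ == "__main__":
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    for mib in (16, 256, 1024):
        gbps = bench(mib * 1024 * 1024 // 2)  # BF16 = 2 bytes per element
        if dist.get_rank() == 0:
            print(f"{mib} MiB all-reduce: ~{gbps:.1f} GB/s bus bandwidth")
    dist.destroy_process_group()
```

Sweeping message sizes like this helps separate latency-bound small messages from bandwidth-bound large ones across NVLink and InfiniBand paths.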

Mentorship & Collaboration

- Coach engineers on large-scale distributed-training best practices.
- Work directly with lighthouse customers, helping them scale models to tens of billions of parameters.

Who You Are:

- 10+ years building distributed systems or HPC/cloud services, with 4+ years focused on large-scale ML training or similar high-performance workloads.
- Demonstrated expertise in data/model/tensor/pipeline parallelism, mixed-precision training (BF16, FP8), and optimizer sharding (ZeRO, ZeRO-Infinity).
- Deep knowledge of PyTorch internals, NCCL/SHARP, RDMA, NUMA, and GPU interconnect topologies.
- Proven track record of squeezing every last TFLOP from multi-node GPU clusters and reducing total training time and cost for massive models.
- Fluency with Kubernetes (or Slurm/Ray) at production scale, plus CI/CD, service meshes, and observability stacks (Prometheus, Grafana, OpenTelemetry).
- Excellent communicator who influences architecture across teams and presents complex trade-offs to executives and customers.
- Bachelor's or Master's in CS, EE, or a related field (or equivalent practical experience).

Preferred:

- Code contributions to open-source training frameworks (DeepSpeed, Megatron-LM, FSDP, MosaicML, etc.).
- Experience running multi-region or exaFLOP-scale training jobs at a hyperscaler or AI research lab.
- Publications/talks on large-model optimization, gradient compression, or advanced checkpoint orchestration.

Wondering if you're a good fit?

We believe in investing in our people and value candidates who can bring their own diversified experiences to our teams, even if you aren't a 100% skill or experience match.

Why CoreWeave?

You will be the technical spearhead of an industry-defining training platform, partnering with world-class researchers and engineers who are pushing the boundaries of generative AI and foundation-model scale. If shaving weeks off pre-training cycles and inventing new techniques for billion-parameter models excites you, we'd love to chat.

The base salary range for this role is $206,000 to $303,000.

The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).

What We Offer

The range we've posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors. These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:

- Medical, dental, and vision insurance - 100% paid for by CoreWeave
- Company-paid life insurance
- Voluntary supplemental life insurance
- Short- and long-term disability insurance
- Flexible Spending Account
- Health Savings Account
- Tuition reimbursement
- Ability to participate in the Employee Stock Purchase Program (ESPP)
- Mental wellness benefits through Spring Health
- Family-forming support provided by Carrot
- Paid parental leave
- Flexible, full-service childcare support with Kinside
- 401(k) with a generous employer match
- Flexible PTO
- Catered lunch each day in our office and data center locations
- A casual work environment
- A work culture focused on innovative disruption

Our Workplace

While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or genetic information. As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact:

careers@coreweave.com

Export Control Compliance

This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, the applicant must be (A) a U.S. person, defined as (i) a U.S. citizen or national, (ii) a U.S. lawful permanent resident (green card holder), (iii) a refugee under 8 U.S.C. § 1157, or (iv) an asylee under 8 U.S.C. § 1158; (B) eligible to access the export controlled information without a required export authorization; or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.
