DigitalOcean

Principal Engineer, Inference Service

DigitalOcean, Boston, Massachusetts, US 02298


Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you'll find your place here. We value winning together, while learning, having fun, and making a profound difference for the dreamers and builders in the world.

We're seeking an experienced Principal Software Engineer to drive the design, development, and scaling of our Large Language Model (LLM) inference services. As a Principal Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing and AI. This team will be building a new product that brings our famed DigitalOcean simplicity to the world of LLM hosting, serving, and optimization. In this role, you will build systems for inference serving of popular open-source / open-weights LLMs as well as custom models, develop novel techniques for optimizing these models, and scale the platform to handle millions of users across the globe.

What You'll Do

- Design and implement an inference platform for serving large language models, optimized for the various GPU platforms they will run on.
- Develop and shepherd complex AI and cloud engineering projects through the entire product development lifecycle (PDLC): ideation, product definition, experimentation, prototyping, development, testing, release, and operations.
- Optimize runtime and infrastructure layers of the inference stack for best model performance.
- Build native cross-platform inference support across NVIDIA and AMD GPUs for a variety of model architectures.
- Contribute to open-source inference engines to make them perform better on the DigitalOcean cloud.
- Build tooling and observability to monitor system health, and build auto-tuning capabilities.
- Build benchmarking frameworks to test model-serving performance and guide system and infrastructure tuning efforts.
- Mentor engineers on inference systems, GPU infrastructure, and distributed systems best practices.

Indicators of a Good Fit

- 10+ years of experience in software engineering, including 2+ years building AI/ML technologies (ideally related to LLM hosting and inference).
- Enduring interest in distributed systems design, AI/ML, and implementation at scale in the cloud.
- Deep expertise in cloud computing platforms and modern AI/ML technologies.
- Experience with modern LLMs, ideally related to hosting, serving, and optimizing such models.
- Experience with one or more inference engines (vLLM, SGLang, Modular Max, etc.) would be a bonus.
- Experience researching, evaluating, and building with open-source technologies.
- Proficiency in programming languages commonly used in cloud development, such as Python and Go.
- Experience with various GPU platforms from AMD and NVIDIA, and the associated toolsets for tuning, configuring, and accelerating workloads on them (e.g., CUDA and ROCm), would be ideal but is not required.
- A strong sense of ownership and a drive to figure out and resolve any issues preventing you and your team from delivering value to your customers.
- An appreciation for process and for developing cross-disciplinary collaboration between engineering, operations, support, and product groups.
- Familiarity with end-to-end quality best practices and their implementation.
- Experience coordinating with partner teams across time zones and geographies.
- Experience with infrastructure-as-code (IaC) tools like Terraform or Ansible.
- A passion for coaching and mentoring junior software engineers.

Why You'll Like Working for DigitalOcean

We innovate with purpose. You'll be part of a cutting-edge technology company with an upward trajectory, proud to simplify cloud and AI so builders can spend more time creating software that changes the world. You will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a strong sense of responsibility for customers, products, employees, and decisions.

We prioritize career development. You'll do the best work of your career, alongside some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide resources to ensure you keep growing, and we provide reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support continued growth and development.

We care about your well-being. We provide a competitive array of benefits to support you, including an Employee Assistance Program, local employee meetups, and a flexible time-off policy, with benefits varying by location where applicable.

We reward our employees. The salary range for this position is $206,000 - $250,000 and may include a bonus and equity compensation; eligible employees may participate in an Employee Stock Purchase Program.

We value diversity and inclusion. We are an equal-opportunity employer and do not discriminate on the basis of any protected characteristic.

This is a remote role.

Other

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Internet Publishing
