MongoDB
Lead Engineer, Inference Platform (Palo Alto)
MongoDB, Palo Alto, California, United States, 94306
Overview
MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere, on premises or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.
About the Role
We’re looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, retrieval, and AI-native features across MongoDB Atlas. This role is part of the broader Search and AI Platform team and involves close collaboration with AI engineers and researchers from our Voyage.ai acquisition, who are developing industry-leading embedding models. Together, we’re building the infrastructure that enables real-time, high-scale, low-latency inference, deeply integrated into Atlas and optimized for developer experience.
As a Lead Engineer, Inference Platform, you’ll be hands-on with design and implementation while working with engineers across experience levels to build a robust, scalable system. The focus is on latency, availability, observability, and scalability in a multi-tenant, cloud-native environment. You will also guide the technical direction of the team, mentor junior engineers, and ensure the delivery of high-quality, impactful features.
We are looking to speak to candidates who are based in Palo Alto for our hybrid working model.