Relace
About Us

Relace is building the models and infrastructure that code agents reach for. We power the fastest model on OpenRouter (10,000 tok/s) and deliver small language models optimized for retrieval, code application, and core code generation.
Our technology supports some of the world's fastest-moving companies, including Lovable, Figma, and Vercel, as they deploy and scale code generation to hundreds of millions of users. We recently raised our Series A from a16z, and we're growing quickly. Our team is made up of mathematicians, physicists, and computer scientists who are deeply passionate about their craft. If you thrive on ambitious technical problems, care about elegant systems design, and want to build the foundation of how code gets written at scale, this is the place for you.

The Role

We're looking for a Machine Learning Engineer who loves getting close to the metal. This is a hands-on engineering role focused on making models faster, more efficient, and more reliable through low-level optimization and smart systems design. The ideal candidate is excited by CUDA kernels, memory layouts, GPU scheduling, and squeezing performance out of complex training and inference workloads. They should be just as comfortable optimizing compute and networking paths as they are working alongside research teams to productionize new architectures. This is a role for someone who enjoys deep performance tuning, understands the realities of running large-scale ML systems, and thrives in fast-moving, high-leverage environments.

Requirements
Strong background in systems-level ML engineering.
Experience with CUDA, GPU kernel optimization, and performance tuning.
Fluency in Python and at least one systems language (C++ or Rust preferred).
Familiarity with distributed training frameworks (e.g., PyTorch, JAX, or DeepSpeed).
Experience working with large-scale training or inference infrastructure.
Understanding of memory management, parallelization, and hardware-aware model optimization.
2+ years of experience working in ML infrastructure or performance-critical environments.
Willingness to work in person from our San Francisco office in FiDi.