Liquid AI, Inc.

Member of Technical Staff - Edge Inference Engineer

Liquid AI, Inc., Boston, Massachusetts, US 02298


Work With Us

At Liquid, we’re not just building AI models; we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas; we’re architecting what comes next.

We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you’re helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments, your work will directly shape the frontier of intelligent systems.

This Role Is For You If:

You are a highly skilled engineer with extensive experience in inference on embedded hardware and a deep understanding of CPU, NPU, and GPU architectures

You are proficient in building and enhancing edge inference stacks

Strong ML Experience:

Proficiency in Python and PyTorch to interface effectively with the ML team at a deeply technical level

Hardware Awareness:

Must understand modern hardware architecture, including cache hierarchies and memory access patterns, and their impact on performance (see the short sketch after this list)

Coding Proficiency:

Expertise in Python, C++, or Rust for AI-driven real-time embedded systems

Optimization of Low-Level Primitives:

You are comfortable taking responsibility for optimizing core primitives to ensure efficient model execution

Self-Direction and Ownership:

Ability to take a PyTorch model and its inference requirements and deliver a fully optimized edge inference stack with minimal guidance

Desired Experience:

Experience with mobile development and cache-aware algorithms will be highly valued
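To make the hardware-awareness and cache-aware-algorithm points above concrete, here is a minimal illustrative C++ sketch (ours, not part of the posting) contrasting a unit-stride traversal of a row-major matrix with a strided one; on typical hardware the second loop runs several times slower purely because of cache behavior:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Sums a row-major matrix two ways. The row-order loop touches memory
// sequentially, so every 64-byte cache line it fetches is fully used.
// The column-order loop strides by `cols` floats, so for large matrices
// almost every access misses the cache.
int main() {
    const std::size_t rows = 4096, cols = 4096;
    std::vector<float> m(rows * cols, 1.0f);

    auto time_sum = [&](bool row_order) {
        const auto t0 = std::chrono::steady_clock::now();
        float sum = 0.0f;
        if (row_order) {
            for (std::size_t r = 0; r < rows; ++r)
                for (std::size_t c = 0; c < cols; ++c)
                    sum += m[r * cols + c]; // unit stride: cache-friendly
        } else {
            for (std::size_t c = 0; c < cols; ++c)
                for (std::size_t r = 0; r < rows; ++r)
                    sum += m[r * cols + c]; // large stride: cache-hostile
        }
        const auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                            std::chrono::steady_clock::now() - t0).count();
        std::printf("%s: sum=%.0f in %lld us\n",
                    row_order ? "row-order" : "col-order", sum, (long long)us);
    };

    time_sum(true);
    time_sum(false);
}
```

Same arithmetic, same data; only the access pattern differs. That gap is exactly the kind of effect this role is expected to reason about.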

What You'll Actually Do:

Optimize inference stacks tailored to each platform as we prepare to deploy our models across various edge device types, including CPUs, embedded GPUs, and NPUs

Take our models, dive deep into the task, and return with a highly optimized inference stack, leveraging existing frameworks such as llama.cpp, ExecuTorch, and TensorRT to deliver exceptional throughput and low latency
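As a hedged illustration of the core primitives such a stack optimizes, here is a simplified C++ dot product over block-quantized int8 weights, loosely in the spirit of the quantized kernels inside runtimes like llama.cpp. The block size and layout here are assumptions for the example, not any framework's actual format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Dot product of float activations against block-quantized int8 weights.
// Each block of QK weights shares one float scale, so a weight costs
// ~1 byte instead of 4. QK = 32 is an assumption for this sketch,
// not any real quantization format.
constexpr int QK = 32;

struct QBlock {
    float scale;   // dequantized weight = scale * q[i]
    int8_t q[QK];  // quantized weights
};

float dot_q8(const std::vector<QBlock>& w, const std::vector<float>& x) {
    float acc = 0.0f;
    for (std::size_t b = 0; b < w.size(); ++b) {
        float block_acc = 0.0f;
        for (int i = 0; i < QK; ++i)
            block_acc += static_cast<float>(w[b].q[i]) * x[b * QK + i];
        acc += w[b].scale * block_acc; // apply the shared scale once per block
    }
    return acc;
}

int main() {
    std::vector<QBlock> w(2);
    for (auto& blk : w) {
        blk.scale = 0.05f;
        for (int i = 0; i < QK; ++i) blk.q[i] = 10;
    }
    std::vector<float> x(2 * QK, 1.0f);
    // Expected: 2 blocks * 32 elements * (0.05 * 10 * 1.0) = 32
    std::printf("dot = %f\n", dot_q8(w, x));
}
```

In a production edge stack this inner loop would typically be vectorized with platform intrinsics (e.g., NEON on mobile CPUs) and tiled for the cache hierarchy; the scalar version above only shows the data layout and the math.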

What You'll Gain:

Hands-on experience with state-of-the-art technology at a leading AI company

A collaborative, fast-paced environment where your work directly shapes our products and the next generation of LFMs

About Liquid AI

Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
