Volt, Inc.

Senior Applied AI Engineer (Multimodal Perception & Reasoning)

Volt, Inc., San Francisco, California, United States, 94199


VOLT is building the next generation of AI perception systems for the physical world, focused on safety, security, and real-time risk detection. We are seeking a Senior Applied AI & Machine Learning Engineer to design, optimize, and ship multimodal AI models that operate reliably in real-world environments. This is a deeply applied role, centered on taking models from data to production—across both edge devices and cloud infrastructure. You will work on vision, video, and language-based models that understand real-world scenes and events, and you will be accountable for their accuracy, latency, robustness, and cost in production systems. This role reports directly to the Head of Engineering and plays a critical role in advancing VOLT AI’s core perception platform.

Key Responsibilities

- Build, fine-tune, and deploy production-grade multimodal models for safety and security applications, with a focus on visual and video perception, language-assisted and multimodal reasoning, and temporal understanding of real-world environments
- Own the full applied ML lifecycle, including data collection, labeling strategies, and dataset curation; model fine-tuning, evaluation, and iteration; and deployment, monitoring, and continuous improvement in production
- Drive model performance in real-world conditions, optimizing for high precision and recall, low false positives and false negatives, and robustness to noise, lighting changes, occlusion, and domain shift
- Optimize models for edge and cloud deployment, including quantization, pruning, and model compression; latency, throughput, and memory optimization; and hardware-aware tuning for GPUs and edge accelerators
- Build and maintain training and inference pipelines that support scalable experimentation and evaluation, reproducibility and model versioning, and reliable production deployment
- Collaborate closely with infrastructure and systems engineers to integrate models into real-time perception pipelines, balance accuracy, performance, and cost constraints, and diagnose and resolve production inference issues
- Use real-world deployment feedback and metrics to drive data and model improvements

Required Qualifications

- 8+ years of experience in applied machine learning or AI systems
- Strong hands-on experience with vision, video, or multimodal models
- Proven experience taking models into production, not just research prototypes
- Deep understanding of model optimization (quantization, pruning, performance tuning)
- Proficiency in Python and modern ML frameworks (e.g., PyTorch)
- Experience evaluating models using real-world metrics and constraints
- Ability to operate independently and own complex technical systems end to end

Preferred Qualifications

- Experience with multimodal or vision-language models (CLIP-like, BLIP-like, or custom)
- Experience deploying models to edge or resource-constrained environments
- Familiarity with inference optimization stacks (ONNX, TensorRT, CUDA)
- Experience working on physical-world perception systems (video, sensors, environments)
- Background in safety, security, robotics, or autonomous systems
- Experience mentoring senior engineers or providing technical leadership

What Success Looks Like

- Models ship reliably and improve measurable safety outcomes
- Precision and recall improve while inference cost and latency decrease
- Edge and cloud inference pipelines operate at production scale
- Data and model iteration loops accelerate over time
- AI perception becomes a durable competitive advantage for VOLT AI

Compensation and Benefits

$175,000 - $220,000 a year

At VOLT AI, you will build applied AI systems that run in the real world—on live video, in real environments, under real constraints. This role is for an engineer who wants to ship models, optimize them aggressively, and see their impact in production, not publish papers.
