Foundation
Responsibilities
- Innovate vision and control systems: Design and implement machine-learning pipelines for perception and control, applying modern pre-trained models and fine-tuning them for robotics; advance sim-to-real transfer.
- Integrate ML with control loops: Build feedback-aware perception modules that work within real-time control and embedded pipelines.
- Optimize for edge compute: Develop models and algorithms optimized for deployment on constrained hardware (robot SoCs, embedded GPUs).
- Deep hardware engagement: Interface ML pipelines with sensor-actuator loops; take responsibility for lab setup, characterization, and performance tuning.
- Collaborate on system design: Work cross-functionally with the safety, perception, and motion-planning teams to architect control strategies for high-DoF robots.
- Publish, mentor, learn: Contribute to technical documentation, present findings internally, and guide junior engineers.
What Kind of Person We Are Looking For
- Strong ML background: Proven experience with modern ML architectures (LVMs, transformers, reinforcement learning, CNNs, among others); hands-on work with pre-trained models and fine-tuning for real-world applications.
- Simulation & robotics stack: Deep familiarity with ROS 2, Isaac Sim, and MuJoCo; able to orchestrate end-to-end pipelines.
- Python + frameworks: Expert-level Python; hands-on experience with PyTorch and JAX, plus proficiency in C++ or other embedded languages.
- Sim-to-real expertise: Demonstrated ability to bridge simulated prototypes to real-world systems.
- Hardware-first mindset: Experience integrating ML models with sensors, cameras, motors, and embedded compute; comfortable tweaking hardware and firmware.
- Edge deployment skills: Experience optimizing models and deploying them on edge devices.
- Education & experience: MSc or PhD in Robotics, Computer Science, Controls, or an equivalent field; 5+ years in robotics/ML, with at least 3 years in vision/control applications.
Nice-to-Have
- Experience with active perception and vision-guided manipulation pipelines.
- Background in real-time operating systems (RTOS) or embedded Linux.
- Publications or open-source contributions in robotics or ML.
- Familiarity with multi-agent or human-robot interaction systems.
Benefits
We provide market-standard benefits (health, vision, dental, 401(k), etc.). Join us for the culture and the mission, not for the benefits. Annual compensation is expected to be between $100,000 and $1,000,000; exact compensation may vary based on skills, experience, and location.