Mutable Tactics Ltd.
ML Engineer (perception & state estimation)
Mutable Tactics Ltd., Cambridge, Massachusetts, US, 02140
Department
Perception
Job Type
Full-time
Location
Cambridge
Modality
Hybrid
We are looking for a Machine Learning Engineer to join our Perception Team. You will build the core perception and reasoning engine for our flagship multi-agent system, architecting the software that transforms raw, noisy sensor data into a rich, symbolic world model. The team develops and implements the algorithms that manage perception inputs and maintain a knowledge manager built on those inputs.
About the Role
You'll form part of the Perception Team, which enables the mastermind's understanding of, and reasoning about, its environment. This is a hybrid position; the successful candidate will be expected to work from the office at least three days a week.
What you'll get to do
Multi-Sensor Fusion: Design and implement algorithms that manage the fusion of heterogeneous sensor streams (e.g., EO/IR, LiDAR, and neuromorphic cameras) into a single, coherent picture of the world.
Object Recognition: Build and deploy models for real-time object detection, classification, and tracking, transforming raw data into structured, classified objects with unique IDs and states.
World Modeling: Develop the Knowledge Manager, the central repository for abstract and symbolic world knowledge; you will be responsible for inferring the logical relationships between objects and agents.
Probabilistic State Estimation: Implement and maintain the belief state over the environment, a core component of the Knowledge Manager.
Goal Inference: Create the logic that translates high-level user commands into formal, predicate-based goal states (see the illustrative sketch after this list).
API Collaboration: Work closely with the Systems and Behaviour teams to define and refine APIs.
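For illustration only, here is a minimal sketch of what a predicate-based goal state produced by goal inference could look like in Python. This is not the actual Knowledge Manager API; the Predicate, GoalState, and infer_goal names are hypothetical, and the string matching is a stand-in for real command parsing.

from dataclasses import dataclass

# Hypothetical types for illustration only (not the real Knowledge Manager API).
@dataclass(frozen=True)
class Predicate:
    name: str
    args: tuple[str, ...]

@dataclass(frozen=True)
class GoalState:
    predicates: frozenset[Predicate]

def infer_goal(command: str) -> GoalState:
    # Toy rule-based mapping; a real system would parse or learn command semantics.
    if command.startswith("inspect "):
        target = command.removeprefix("inspect ").strip()  # Python 3.9+
        return GoalState(frozenset({
            Predicate("observed", (target,)),
            Predicate("classified", (target,)),
        }))
    raise ValueError(f"unrecognized command: {command!r}")

goal = infer_goal("inspect building_12")
print(goal.predicates)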
What we'd like to see
A strong theoretical foundation and practical experience in probabilistic machine learning (e.g., Bayesian inference, Gaussian Processes, state estimation filters like EKFs/UKFs); a minimal belief-update sketch follows this list.
Demonstrable experience with modern ML frameworks (PyTorch preferred) and computer vision libraries (OpenCV) applied to real-world sensor data.
Hands-on experience with sensor fusion techniques for combining data from sources like cameras and LiDAR.
Production-quality coding skills in both Python and C++.
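As a rough illustration of the state-estimation background we mean, here is a linear Kalman-filter belief update for a 1D constant-velocity track in NumPy. It is a sketch under simplifying assumptions (linear models, hand-picked noise values), the linear special case of the EKF/UKF filters mentioned above, not code from our system.

import numpy as np

# State x = [position, velocity]; measurement z = position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition model
H = np.array([[1.0, 0.0]])              # measurement model
Q = np.diag([1e-3, 1e-2])               # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])                 # initial belief mean
P = np.eye(2)                            # initial belief covariance
for z in ([0.11], [0.24], [0.31]):       # noisy position measurements
    x, P = predict(x, P)
    x, P = update(x, P, np.array(z))
print(x)                                 # estimated position and velocity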
What will set you apart
Proven experience developing and deploying software for real-world robotic systems (e.g., UAVs, UGVs).
Deep expertise in sensor fusion techniques, particularly with state estimation filters like the EKF, for tracking and localization.
Hands-on experience with the Robot Operating System (ROS 2) and an understanding of the underlying DDS middleware and its QoS settings (see the sketch after this list).
Practical experience in multi-agent reinforcement learning (MARL), planning under uncertainty, or collaborative robotics.
Familiarity with high-fidelity simulation environments for robotics, especially NVIDIA Isaac Lab.
Familiarity with the challenges of real-time systems, including managing latency, ensuring deterministic timing (e.g., PTP), and maintaining performance on degraded communication links.
Experience with knowledge representation, logical inference, or symbolic reasoning systems.
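As an example of the kind of ROS 2 QoS consideration mentioned above, here is a minimal rclpy sketch that subscribes to a high-rate sensor stream with best-effort, shallow-history QoS, similar in spirit to the built-in qos_profile_sensor_data profile. It assumes a working ROS 2 environment; the topic name /lidar/points and node name are placeholders, not part of our stack.

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, QoSReliabilityPolicy, QoSHistoryPolicy
from sensor_msgs.msg import PointCloud2

class LidarListener(Node):
    def __init__(self):
        super().__init__('lidar_listener')
        # Best-effort, keep-last QoS is a common choice for high-rate sensor
        # streams over lossy links, trading delivery guarantees for latency.
        qos = QoSProfile(
            reliability=QoSReliabilityPolicy.BEST_EFFORT,
            history=QoSHistoryPolicy.KEEP_LAST,
            depth=5,
        )
        self.create_subscription(PointCloud2, '/lidar/points', self.on_cloud, qos)

    def on_cloud(self, msg: PointCloud2) -> None:
        self.get_logger().info(f'received cloud with {msg.width * msg.height} points')

def main():
    rclpy.init()
    rclpy.spin(LidarListener())

if __name__ == '__main__':
    main()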