Maxinsights
Robotics Algorithm Engineer
We are looking for a Robotics Algorithm Engineer passionate about building next‑generation robot perception and mapping systems. You will work on vision‑based localization, 3D reconstruction, and spatial understanding, enabling robots to perceive, navigate, and interact intelligently with the real world.
Overview
You will join a world‑class team of roboticists, computer vision scientists, and AI researchers developing embodied AI datasets, algorithms, and systems that bridge the gap between human and robot perception.
Responsibilities
Design and implement visual SLAM, multi‑sensor fusion, and structure‑from‑motion (SfM) pipelines for mobile robots or human‑centric perception rigs.
Develop algorithms for 3D reconstruction, dense mapping, and scene understanding from multi‑view or egocentric visual data.
Integrate and optimize perception algorithms on real robotic platforms (mobile bases, manipulators, or human motion rigs).
Conduct research and prototyping in areas such as NeRF‑based mapping, learning‑based SLAM, pose graph optimization, or multi‑view geometry.
Collaborate with the data and platform teams to improve dataset quality, sensor calibration, and large‑scale evaluation pipelines.
Qualifications
MS or PhD in Robotics, Computer Vision, Machine Learning, or a related field.
Strong experience with visual SLAM, VO/VIO, 3D reconstruction, or multi‑view geometry.
Proficiency in C++ and Python, and familiarity with ROS / ROS2.
Solid understanding of camera models, sensor fusion (IMU, LiDAR, depth), and optimization frameworks (Ceres, GTSAM, g2o).
Experience with datasets such as KITTI, TUM, EuRoC, Replica, or similar.
Strong analytical and debugging skills, with the ability to take research ideas to working systems.
Preferred / Bonus Skills
Experience in autonomous driving, robotics research, or embodied AI datasets.
Familiarity with learning‑based SLAM or differentiable mapping networks.
Hands‑on experience with multi‑camera calibration, stereo depth, or 3D reconstruction from RGBD / stereo / NeRF.
Publications in top venues (ICRA, CVPR, CoRL, RSS, ICCV, NeurIPS, etc.) are a plus.
Exposure to GPU acceleration, simulation environments (MuJoCo, Isaac, Genesis, etc.), or robot hardware integration.
Job Details
Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Engineering and Information Technology