Samsonrose
We are exclusively engaged with an outdoor robotics client to find a Senior Computer Vision Engineer to drive the development of vision-based systems for their autonomous robots. This role focuses on sensor calibration, point cloud processing, and terrain analysis to enable precise environmental perception and navigation. You will work closely with a cross-disciplinary team to ensure the robots operate effectively in dynamic environments.
Key Responsibilities
- Develop and optimize computer vision algorithms for sensor calibration (e.g., cameras, LiDAR, IMUs).
- Construct and process point clouds from multi-sensor data (cameras, LiDAR, IMUs, etc.).
- Perform terrain analysis for obstacle detection, ground surface modeling, and path planning.
- Implement and refine sensor fusion techniques for accurate environmental mapping.
- Design and implement algorithms for object recognition, segmentation, and tracking.
- Develop and maintain real-time vision pipelines to support autonomous navigation.
- Document algorithms, processes, and experimental results.

Requirements
- 7+ years in computer vision, image processing, or related fields, with experience in autonomous systems.
- Proficiency in C++ and Python.
- Experience with point cloud processing and 3D reconstruction.
- Expertise in sensor calibration methods (intrinsic/extrinsic calibration of cameras, LiDAR, and IMUs).
- Strong knowledge of computer vision libraries (OpenCV, PCL, etc.).
- Understanding of machine learning models for vision tasks (classification, segmentation).
- Experience with development tools (Git, JIRA) and debugging tools for vision systems.
- Strong problem-solving abilities, attention to detail, and the ability to work collaboratively in a dynamic environment.

Preferred Qualifications
- Experience with terrain analysis and segmentation techniques.
- Familiarity with ROS (Robot Operating System) and real-time processing pipelines.
- Experience with SLAM (Simultaneous Localization and Mapping).
- Familiarity with deploying vision models on edge computing hardware.

If this role is of interest to you, please apply with your current resume. We will reach out to schedule an initial call.