Pony.ai
Software Engineer, Perception (Robotics)
Base pay range: $120,000.00/yr - $200,000.00/yr
This range is provided by Pony.ai. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.

About Us
Founded in 2016 in Silicon Valley, Pony.ai has quickly become a global leader in autonomous mobility and a pioneer in deploying autonomous mobility technologies and services across a rapidly expanding footprint of sites around the world. Operating Robotaxi, Robotruck, and Personally Owned Vehicles (POV) business units, Pony.ai is an industry leader in the commercialization of autonomous driving and is committed to developing the safest autonomous driving capabilities on a global scale.

About The Role
As part of the Perception team, you will help design and build the sensor data pipeline that powers our self-driving vehicles. Our team is responsible for turning raw sensor signals into reliable, real-time information that enables advanced perception models. You’ll work across multiple sensing modalities (cameras, lidars, radars, IMUs, microphones, and more) and help ensure that our autonomous driving system can perceive the world with accuracy and robustness.

Responsibilities
- Work on algorithms, tools, and models that extract critical information from multi-modal sensors in real time.
- Develop and validate systems that ensure sensor data is accurate, synchronized, and reliable, including calibration, error detection, and health monitoring.
- Integrate sensor data into the perception stack and build efficient data flows that power real-time algorithms.
- Preprocess multi-sensor inputs to improve perception performance, such as time synchronization and ground detection.
- Contribute to the overall perception pipeline, from raw sensor integration to AI-ready features.

Requirements
- Bachelor’s, Master’s, or PhD degree in Computer Science, Robotics, Computer Vision, or a related field.
- Solid programming skills in C++ and/or Python.
- Strong problem-solving and debugging skills; exposure to real-time or systems-level software is a plus.
- Familiarity with one or more areas: robotics, computer vision, signal processing, or deep learning.
- Excellent communication skills and the ability to work in a collaborative, fast-paced environment.

Compensation and Benefits
Base Salary Range: $120,000 - $200,000 annually. Compensation may vary outside of this range depending on many factors, including the candidate’s qualifications, skills, competencies, experience, and location.
- Retirement Plan (Traditional and Roth 401(k))
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation & Public Holidays)
- Family Leave (Maternity, Paternity)
- Short-Term & Long-Term Disability
- Free Food & Snacks
- Medical Insurance
- Vision Insurance