FieldAI
Robotics State Estimation & Localization Engineer
FieldAI, Irvine, California, United States, 92713
Field AI is transforming how robots interact with the real world. We are building risk-aware, reliable, and field-ready AI systems that address the most complex challenges in robotics, unlocking the full potential of embodied intelligence. We go beyond typical data-driven approaches or pure transformer-based architectures, and are charting a new course, with already-globally-deployed solutions delivering real-world results and rapidly improving models through real-field applications.
About the Job
We're building the estimation and navigation stack that keeps our legged and humanoid robots balanced, aware, and mission-ready, indoors and out, with or without GPS. You'll design and ship real-time estimators and fusion pipelines that combine IMU and GNSS/GPS/RTK with legged-robot proprioception (joint encoders, torque/force and foot-contact sensors) and exteroception (cameras, LiDAR, radar/UWB). You'll take algorithms from log-replay to rugged field performance on embedded/Linux targets, partnering closely with controls, perception, and planning.
What You'll Get To Do

State Estimation for Legged/Humanoid Bases
- Design and tune EKF/UKF error-state filters for floating-base pose/velocity, COM, IMU biases, and contact states
- Fuse IMU, joint encoders, and foot F/T and contact sensors; implement ZUPT/ZARU, slip handling, and kinematic/dynamic constraints
- Expose clean interfaces (frames/timestamps/covariances) to whole-body control and footstep planning
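As an illustrative sketch of the zero-velocity update (ZUPT) listed above: during detected foot stance, a pseudo-measurement of zero velocity is fused into the filter to arrest drift. This is a minimal, hypothetical 1D example (position/velocity state, hand-picked covariances), not Field AI's actual stack.

```python
import numpy as np

def zupt_update(x, P, R_zupt=1e-4):
    """Zero-velocity update (ZUPT): during detected foot stance,
    fuse the pseudo-measurement v = 0 into a [position, velocity] state."""
    H = np.array([[0.0, 1.0]])          # observe velocity only
    z = np.array([0.0])                 # stance implies zero velocity
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R_zupt            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# A drifted velocity estimate is pulled back toward zero at stance
x = np.array([1.0, 0.3])                # position, velocity (velocity has drifted)
P = np.diag([0.1, 0.05])
x, P = zupt_update(x, P)
print(x)                                # velocity component shrinks toward 0
```

In a real legged-robot estimator the same idea applies per foot in the error-state filter, gated by the contact detector, with ZARU playing the analogous role for angular rate.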
Perception-Aided Localization & Mapping
- Stand up VIO/LIO pipelines (stereo/RGB-D + LiDAR) for GPS-denied operation, with map-based relocalization and loop closure
- Add global aids (GNSS/RTK, UWB beacons, prior maps) and blend filtering with factor-graph smoothing when advantageous
- Manage drift/consistency with robust outlier rejection, gating, and integrity monitoring
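The outlier gating mentioned above is commonly implemented as a chi-square test on the normalized innovation. A minimal sketch (hypothetical 2D measurement, hardcoded 95% threshold; illustrative only):

```python
import numpy as np

CHI2_95_DOF2 = 5.991  # 95% chi-square threshold for 2 degrees of freedom

def gate(innovation, S, threshold=CHI2_95_DOF2):
    """Mahalanobis gating: accept a measurement only if its normalized
    innovation squared d^2 = y^T S^-1 y falls below a chi-square threshold."""
    d2 = float(innovation @ np.linalg.solve(S, innovation))
    return d2 <= threshold

S = np.diag([0.04, 0.04])               # innovation covariance (20 cm sigma per axis)
print(gate(np.array([0.1, 0.1]), S))    # small residual: accepted
print(gate(np.array([1.0, 1.0]), S))    # gross outlier: rejected
```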
Calibration, Timing & Robustness
- Own time sync (PTP/Chrony/hardware triggers) and multi-sensor calibration (Allan variance for IMUs, camera-IMU/LiDAR-IMU/base extrinsics, encoder offsets)
- Build health monitoring, FDIR, and graceful-degradation behaviors for harsh terrain and intermittent sensors
- Establish KPIs (ATE/RTE, NEES/NIS, availability) and automated regression tests
Tooling, Simulation & Field Operations
- Create log-replay pipelines, datasets, and dashboards for rapid iteration and performance tracking
- Validate in simulation (Gazebo, Isaac Sim, etc.) and in the field (stairs, rubble, ramps, slippery floors)
- Optimize and deploy on embedded targets (Jetson/x86), profiling latency, memory, and numerical stability
What You Have
- Strong fundamentals in estimation and sensor fusion (EKF/UKF, error-state formulations, observability/consistency, covariance tuning)
- Hands-on experience with IMUs (strapdown mechanization, bias/scale, coning/sculling) and GNSS/GPS/RTK (loosely vs. tightly coupled INS)
- Experience with legged-robot proprioception (joint encoders, foot contact/pressure, torque/force sensors) and with using kinematic/dynamic constraints in estimators
- Proficiency in modern C++ (14/17/20) on Linux; Python for tooling, analysis, and log processing
- Comfort with SO(3)/SE(3), Lie-group math, and non-linear optimization
- Integration experience with at least two of: cameras (VIO), LiDAR (LIO/scan matching), UWB, magnetometer/barometer, radar
- Familiarity with ROS 1/ROS 2, CMake/Bazel, Docker, CI/CD, and reproducible experiments
- Proven track record shipping research-to-production algorithms on real robots with field test cycles
- BS/MS/PhD in Robotics, EE, CS, or AE, or equivalent practical experience
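As a concrete example of the SO(3)/Lie-group fluency asked for above, here is the exponential map from a rotation vector to a rotation matrix via Rodrigues' formula; a minimal NumPy sketch, not production code:

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula:
    R = I + sin(theta) * W + (1 - cos(theta)) * W^2, W = hat(w / theta)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)                # small-angle limit
    W = hat(w / theta)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))  # 90 degrees about z
print(np.round(R, 6))
```

Error-state filters typically parameterize attitude error with exactly such a minimal 3-vector and retract it onto the rotation manifold with this map.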
The Extras That Set You Apart
- Factor-graph SLAM/VIO (GTSAM/iSAM2) and non-linear solvers (Ceres/g2o); hybrid filtering + smoothing in production
- Whole-body/legged tooling, momentum/COM filters, terrain estimation, and contact-rich datasets
- Robustness techniques: adaptive noise models, M-estimators/gating, data association, map-based relocalization
- Experience with GPS-denied navigation at scale (warehouses, construction sites, urban canyons)
- Real-time/performance chops (RT-PREEMPT, lock-free pipelines, deterministic logging, on-robot telemetry)
- Embedded/GPU acceleration (NVIDIA/CUDA) for perception-aided estimation
- Designing calibration and end-of-line test procedures for production
Compensation and Benefits
Our salary range is generous ($70,000–$200,000 annually), and we take an individual's background and experience into consideration when determining final salary; base pay offered may vary considerably depending on geographic location, job-related knowledge, skills, and experience. And while we enjoy being together on-site, we are open to exploring hybrid or remote options.
Why Join Field AI? We are solving one of the world’s most complex challenges: deploying robots in unstructured, previously unknown environments. Our Field Foundational Models set a new standard in perception, planning, localization, and manipulation, ensuring our approach is explainable and safe for deployment.
You will have the opportunity to work with a world-class team that thrives on creativity, resilience, and bold thinking. With a decade-long track record of deploying solutions in the field, winning DARPA challenge segments, and bringing expertise from organizations like DeepMind, NASA JPL, Boston Dynamics, NVIDIA, Amazon, Tesla Autopilot, Cruise Self-Driving, Zoox, Toyota Research Institute, and SpaceX, we are set to achieve our ambitious goals.
Be Part of the Next Robotics Revolution
To tackle such ambitious challenges, we need a team as unique as our vision — innovators who go beyond conventional methods and are eager to tackle tough, uncharted questions. Our team requires not only top AI talent but also exceptional software developers, engineers, product designers, field deployment experts, and communicators.
We are headquartered in always-sunny Mission Viejo (Irvine-adjacent), Southern California, and have US-based and global teammates.
Join us, shape the future, and be part of a fun, close-knit team on an exciting journey!
We celebrate diversity and are committed to creating an inclusive environment for all employees. Candidates and employees are always evaluated based on merit, qualifications, and performance. We will never discriminate on the basis of race, color, gender, national origin, ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability, or any other legally protected status.
Seniority level: Not Applicable
Employment type: Full-time
Job function: Information Technology