DROPZONE ROBOTICS

Software Engineer (AI-Enabled Perception Systems)

DROPZONE ROBOTICS, Los Angeles, California, United States, 90079


Overview

We're looking for a hands-on AI and Embedded Software Engineer to help integrate real-time video perception capabilities for wearable and unmanned ground vehicle (UGV) edge systems designed for emergency medical scene awareness. You'll collaborate with a small, fast-moving team that includes experts in robotics, embedded systems, and tactical medicine. Your work will bridge AI model development and integration, embedded software development, and system-level optimization to help create a field-deployable system that works reliably in resource-constrained real-world environments, with the potential to save lives.

Responsibilities

- Develop and maintain real-time perception processing pipelines for video and multimodal sensor data (RGB, thermal, IMU, depth).
- Integrate, refine, and deploy standard AI/ML models (e.g., object detection, pose estimation, activity recognition) on embedded edge devices (e.g., NVIDIA Jetson, ARM SoCs).
- Implement multi-object tracking and object re-identification across frames and scenes.
- Develop embedded software modules in C++/Python for low-latency sensor interfacing and synchronization.
- Design structured logging and lightweight data storage for detected entities/events (JSON, SQLite, ROS bag); a minimal sketch follows this list.
- Build tools to visualize, test, and debug perception outputs and validate robustness under varying lighting, motion, and occlusion conditions.
- Profile and optimize runtime performance (e.g., TensorRT, ONNX Runtime, CUDA, multithreading).
- Contribute to system integration and reliability in field-deployable, power- and compute-limited environments.
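
To make the detection and logging items above concrete, here is a minimal sketch of a frame-by-frame loop that runs an ONNX detector with ONNX Runtime and records detections as JSON rows in SQLite. The model file name, input layout, and decode_detections() helper are placeholders, not from the posting; real pre- and post-processing depend on the model actually deployed.

```python
# Minimal sketch: run an ONNX detector on video frames and log detections to SQLite as JSON.
# "detector.onnx", the assumed 1x3x640x640 input layout, and decode_detections() are placeholders.
import json
import sqlite3
import time

import cv2
import numpy as np
import onnxruntime as ort


def decode_detections(raw_outputs):
    """Placeholder: convert model-specific raw outputs into
    [{'label': str, 'conf': float, 'bbox': [x1, y1, x2, y2]}, ...]."""
    return []


session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

db = sqlite3.connect("detections.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, frame INTEGER, payload TEXT)")

cap = cv2.VideoCapture(0)  # first attached camera
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Assumed input layout: 1x3x640x640, RGB, float32 in [0, 1].
    blob = cv2.resize(frame, (640, 640))[:, :, ::-1].astype(np.float32) / 255.0
    blob = np.ascontiguousarray(np.transpose(blob, (2, 0, 1)))[np.newaxis]
    raw = session.run(None, {input_name: blob})
    detections = decode_detections(raw)
    # One JSON row per frame keeps the storage lightweight and easy to replay offline.
    db.execute(
        "INSERT INTO events VALUES (?, ?, ?)",
        (time.time(), frame_idx, json.dumps(detections)),
    )
    db.commit()
    frame_idx += 1

cap.release()
db.close()
```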

Requirements

- Proficiency in Python and C++ (for embedded and performance-critical code).
- Hands-on experience with PyTorch or ONNX-based AI models.
- Strong foundation in computer vision (OpenCV, video processing, tracking); a minimal tracking sketch follows this list.
- Experience working in Linux-based embedded environments.
- Familiarity with Git, command-line tools, and build systems (CMake, Make, CI/CD).
- Ability to debug, profile, and optimize software for real-time performance.
- Strong attention to detail and curiosity about learning new tools and workflows.
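
To illustrate the tracking fundamentals referenced above, here is a minimal greedy IoU-association sketch. The Tracker class and threshold are illustrative assumptions; a production tracker would add motion prediction, missed-track persistence, and re-identification features.

```python
# Minimal sketch of IoU-based track association (greedy best match), not a production tracker.
from dataclasses import dataclass, field
from itertools import count

Box = tuple[float, float, float, float]  # x1, y1, x2, y2


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


@dataclass
class Tracker:
    iou_threshold: float = 0.3
    tracks: dict[int, Box] = field(default_factory=dict)  # track_id -> last box
    _ids: count = field(default_factory=count)

    def update(self, detections: list[Box]) -> dict[int, Box]:
        assigned: dict[int, Box] = {}
        unmatched = list(detections)
        # Greedily match each existing track to its best-overlapping detection.
        for track_id, prev_box in self.tracks.items():
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(prev_box, d))
            if iou(prev_box, best) >= self.iou_threshold:
                assigned[track_id] = best
                unmatched.remove(best)
        # Unmatched detections start new tracks.
        for det in unmatched:
            assigned[next(self._ids)] = det
        self.tracks = assigned
        return assigned


# Example: across two frames, the overlapping box keeps its track ID.
tracker = Tracker()
print(tracker.update([(0, 0, 10, 10), (50, 50, 60, 60)]))
print(tracker.update([(1, 1, 11, 11)]))
```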

Nice to Have

- Experience with NVIDIA Jetson, ARM, or other edge AI platforms and embedded GPU devices.
- Knowledge of TensorRT, CUDA, or other hardware acceleration libraries.
- Familiarity with ROS2 or other robotics middleware.
- Exposure to sensor hardware integration (cameras, IMUs, stereo, thermal).
- Background in real-time systems, robotics, or embedded AI applications.
- Familiarity with standardized models such as MediaPipe, MMPose, or Ultralytics YOLOv8+; a minimal tracking sketch follows this list.
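
For reference, a short sketch of frame-by-frame tracking with a pretrained Ultralytics YOLOv8 model. The video path is a placeholder, and exact result fields can vary by ultralytics version.

```python
# Placeholder input path; the yolov8n.pt weights are fetched by ultralytics on first use.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # small pretrained detector
cap = cv2.VideoCapture("field_clip.mp4")  # placeholder video file

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # persist=True carries track IDs across successive frames
    results = model.track(frame, persist=True, verbose=False)
    boxes = results[0].boxes
    if boxes is not None and boxes.id is not None:
        for track_id, xyxy in zip(boxes.id.int().tolist(), boxes.xyxy.tolist()):
            print(int(track_id), [round(v, 1) for v in xyxy])

cap.release()
```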

What We Offer

- See your work deployed in real-world, life-saving search-and-rescue response.
- Collaborate with domain experts from NASA, DoD, and frontline medics.
- Fast-paced environment with flexibility, mentorship, and growth opportunities.

Seniority

Entry level

Employment type

Full-time

Job function

Engineering and Information Technology
