Vecna Robotics

Linux Platform Engineer (Embedded + Machine Learning Deployment)

Vecna Robotics, Waltham, Massachusetts, United States, 02254

Reporting to the Vice President, Autonomy Software, this is a full-time, salaried role based at our Waltham, MA headquarters. Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.

About Vecna Robotics

Vecna Robotics is an intelligent, flexible material handling automation company that keeps goods moving. With award-winning technology engineered for uninterrupted work between autonomous mobile robots, labor, and systems, we make business go. As a company, we are driven by the same collective vision: an uninterrupted and highly efficient global supply chain where robots do the dirty work and people do the human work.

The Linux Platform Engineer Will Be Responsible For

Embedded Linux Engineering

Develop, configure, and maintain custom Linux distributions (e.g., Yocto, Buildroot) for embedded devices.

Optimize kernel, drivers, and system services for performance, footprint, and reliability.

Implement secure boot, OTA updates, and system monitoring solutions.
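
As a rough, non-authoritative illustration of the OTA-update responsibility above, the sketch below verifies a downloaded update image against a SHA-256 digest from a manifest before it is applied. The file paths and manifest layout are hypothetical, and a production updater would additionally verify signatures tied into the secure-boot chain.

    import hashlib
    import json
    from pathlib import Path

    # Hypothetical paths; a real updater would receive these from its OTA service.
    IMAGE_PATH = Path("/data/ota/update.img")
    MANIFEST_PATH = Path("/data/ota/manifest.json")  # e.g. {"sha256": "...", "version": "1.2.3"}

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream the file through SHA-256 so large images never need to fit in RAM."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def image_is_valid() -> bool:
        manifest = json.loads(MANIFEST_PATH.read_text())
        return sha256_of(IMAGE_PATH) == manifest["sha256"]

    if __name__ == "__main__":
        print("update image OK" if image_is_valid() else "update image corrupt; refusing to flash")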

Platform & Infrastructure

Automate build, packaging, and deployment pipelines for embedded Linux platforms (a brief sketch follows this list).

Manage system performance tuning, hardware bring‑up, and debugging at the OS and device-driver level.

Ensure compliance with real-time, safety, and security requirements as needed.
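
A minimal sketch of the build-and-deploy automation mentioned above, assuming a Yocto checkout where bitbake is already on PATH; the image target, machine name, and artifact directory are placeholders, not Vecna's actual configuration.

    import shutil
    import subprocess
    from pathlib import Path

    # Placeholder names for illustration only.
    IMAGE_TARGET = "example-robot-image"
    DEPLOY_DIR = Path("build/tmp/deploy/images/example-machine")
    ARTIFACT_DIR = Path("/srv/artifacts")

    def build_image() -> None:
        # Assumes the Yocto environment script (oe-init-build-env) has been sourced.
        subprocess.run(["bitbake", IMAGE_TARGET], check=True)

    def collect_artifacts() -> None:
        # Copy generated disk images into an artifact store for later pipeline stages.
        ARTIFACT_DIR.mkdir(parents=True, exist_ok=True)
        for artifact in DEPLOY_DIR.glob(f"{IMAGE_TARGET}*.wic*"):
            shutil.copy2(artifact, ARTIFACT_DIR / artifact.name)

    if __name__ == "__main__":
        build_image()
        collect_artifacts()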

Machine Learning Model Deployment

Package, optimize, and deploy ML models onto resource-constrained devices using toolchains such as TensorRT, ONNX Runtime, or OpenVINO (see the sketch after this list).

Integrate ML inference pipelines into embedded applications (C++/Python).

Profile model performance, memory usage, and latency; tune for edge hardware accelerators (GPU, TPU, NPU, FPGA).

Collaborate with ML teams to bridge the gap between training environments (cloud) and inference environments (embedded edge).

Familiarity with model optimization including quantization and pruning.

Familiarity with CUDA.
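
To make the inference and profiling bullets above concrete, here is a small Python sketch using ONNX Runtime. The model path, input shape, and the set of available execution providers are assumptions that would change with the actual model and edge hardware.

    import time

    import numpy as np
    import onnxruntime as ort

    MODEL_PATH = "detector.onnx"  # hypothetical model file

    # Prefer hardware-accelerated providers when the runtime exposes them; fall back to CPU.
    preferred = ("TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
    providers = [p for p in preferred if p in ort.get_available_providers()]

    session = ort.InferenceSession(MODEL_PATH, providers=providers)
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 480, 640).astype(np.float32)  # assumed NCHW input shape

    # Warm-up run, then measure steady-state latency over repeated inferences.
    session.run(None, {input_name: dummy})
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {input_name: dummy})
    latency_ms = (time.perf_counter() - start) / runs * 1e3
    print(f"providers={providers} mean latency={latency_ms:.2f} ms")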

Collaboration & Support

Work closely with software, robotics, and AI teams to align platform capabilities with product requirements.

Write and maintain documentation, deployment guidelines, and troubleshooting playbooks.

Provide technical mentorship in embedded Linux best practices and ML deployment workflows.

What We Are Looking For

Bachelor’s or Master’s degree in Computer Science or Computer Engineering.

5–7 years of hands-on experience in embedded Linux systems engineering, including kernel and device-driver development, system-level work, and system bring-up.

Proficiency with Linux build systems (Yocto, Buildroot, CMake).

Strong programming skills in C/C++.

Experience deploying ML models on embedded hardware (e.g., NVIDIA Jetson, ARM Cortex-A/M, Qualcomm Snapdragon, Intel Movidius).

Familiarity with ML model formats and optimization tools (ONNX, TensorRT, TFLite, OpenVINO); a small quantization example follows this list.

Hands‑on experience with CI/CD pipelines and containerization (Docker, Podman).

Knowledge of cross‑compilation, hardware debugging (JTAG, gdb), and performance profiling.

Strong written and verbal communication skills.
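
Because the requirements call out model optimization tooling (ONNX, TensorRT, TFLite, OpenVINO), here is one hedged example of what the quantization step can look like using ONNX Runtime's post-training dynamic quantization; the file names are hypothetical.

    from pathlib import Path

    from onnxruntime.quantization import QuantType, quantize_dynamic

    fp32_model = Path("detector_fp32.onnx")   # hypothetical FP32 export from the training side
    int8_model = Path("detector_int8.onnx")   # quantized output for the edge device

    # Post-training dynamic quantization: weights stored as INT8, activations quantized at runtime.
    quantize_dynamic(model_input=str(fp32_model), model_output=str(int8_model), weight_type=QuantType.QInt8)

    print(f"{fp32_model.stat().st_size / 1e6:.1f} MB -> {int8_model.stat().st_size / 1e6:.1f} MB")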

Preferred Skills:

Experience with ROS/ROS2 in robotics applications.

Familiarity with hardware accelerators and edge AI SDKs.

Understanding of real-time Linux (PREEMPT_RT).

Exposure to cybersecurity in embedded/edge devices.

Previous experience with production deployment of ML models on edge devices.

Base pay range: $126,000.00/yr – $160,000.00/yr.

We are an equal opportunity employer. We encourage and celebrate diversity.

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
