Overland AI
Infrastructure Software Engineer
Location: Seattle, WA (Hybrid: 3 days onsite)
Travel: In-state travel, plus 1–2 weeks out of state per year
Clearance: Ability to obtain and maintain a DoD Security Clearance
About Overland AI
Founded in 2022 and headquartered in Seattle, Washington, Overland AI is transforming land operations for modern defense. The company draws on more than a decade of advanced research in robotics and machine learning, along with a field-test-forward ethos, to deliver combined capabilities for unit commanders. Our OverDrive autonomy stack enables ground vehicles to navigate and operate off-road in any terrain without GPS or direct operator control. Our intuitive OverWatch C2 interface gives commanders the precise coordination capabilities essential for mission success.
Overland AI has secured funding from prominent defense tech investors, including 8VC and Point72, and has built trusted partnerships with DARPA, the U.S. Army, the Marine Corps, and Special Operations Command. Backed by eight-figure contracts across the Department of Defense, we are strengthening national security by iterating closely with end users engaged in tactical operations.
Role Summary
Overland AI is building off-road autonomy, and our software infrastructure is the backbone that lets our teams move fast, experiment safely, and ship mission-critical systems. As an Infrastructure Software Engineer, you'll design, build, and maintain the internal software systems that power our AI/ML workflows, connect our robots, and streamline developer operations across the company.
You will own core services used daily by researchers, roboticists, and field teams — including internal APIs, experiment tooling, data systems, and compute orchestration. This role is ideal for an engineer who loves building robust systems, improving developer velocity, and shaping the foundation that our autonomy stack relies on.
You’ll collaborate closely with the infrastructure team, ML teams, and robotics teams to build reliable, observable, well‑structured systems in a hybrid cloud/on‑prem environment.
What You’ll Do
Design, build, and maintain internal services and tooling used across AI/ML, autonomy, and field operations teams
Develop both backend services and front‑end interfaces (web apps, dashboards, APIs) that enable experimentation, data access, and compute management
Build and support microservices as well as monolithic systems, including migrating legacy systems to modern service‑oriented architectures
Improve developer velocity and system reliability through better tooling, automation, observability, and CI/CD
Support hybrid deployments across AWS, on‑prem compute, and on‑robot environments
Troubleshoot across software, infrastructure, and networking layers to keep internal systems reliable and performant
Collaborate with ML engineers, roboticists, and internal stakeholders to define requirements and deliver high‑impact tooling
Maintain excellent documentation and support distributed teams (local + remote + field)
Required Qualifications
5+ years of experience in software engineering, infrastructure tooling, or full‑stack development
Proficiency with modern web frameworks and the ability to build both backend and front‑end systems
Experience building and maintaining both microservices and monolithic applications
Strong working knowledge of AWS, Linux systems, and networking fundamentals
Proficiency with containerized environments (Docker) and CI/CD (GitLab, GitHub Actions, or equivalent)
Experience with infrastructure‑as‑code (Terraform, Ansible, etc.)
Demonstrated ability to support multiple internal or customer‑facing applications simultaneously
Excellent troubleshooting skills across software, networking, and infrastructure layers
Strong communication and documentation skills
Nice to Have
Experience with Kubernetes or GitOps tools (ArgoCD, Flux, Spinnaker)
Familiarity with ML infrastructure, experiment tracking, or data visualization tools
Experience with hardware integration or embedded system interfaces
Prior experience supporting hybrid or on‑prem deployments
Experience migrating a monolith to a microservices‑based architecture
Additional Requirements
Ability to travel in‑state for testing, events, and onsite support
Ability to travel out‑of‑state 1–2 weeks per year for demos or field exercises
Ability to work onsite in the Seattle office at least 3 days per week
Ability to obtain and maintain a DoD Security Clearance
Compensation & Benefits
Competitive salary: $130K–$170K annually
Equity compensation
Best‑in‑class healthcare, dental, and vision plans
Unlimited PTO
401(k) with company match
Parental leave
This position involves access to technical data subject to U.S. export control laws (ITAR/EAR).