Insight Recruitment

Machine Learning Operations Engineer

Insight Recruitment, Phoenix, Arizona, United States

Base pay range
$140,000.00/yr - $180,000.00/yr
This range is provided by Insight Recruitment. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.

Overview

ML Ops Engineer @ AI Robotics ScaleUp - Up to $180k + Equity

Join a cutting-edge startup at the intersection of AI, robotics, and national security, developing next-gen autonomous systems that combine advanced machine learning with edge computing to enable coordinated, intelligent behavior across fleets of unmanned ground vehicles.

The Opportunity

We’re seeking an MLOps Engineer to help design, build, and scale the infrastructure that powers our autonomous swarm systems. You’ll play a key role in creating the ML backbone that supports our perception and decision-making models, ensuring fast, reliable, and secure deployment of intelligence across an expanding fleet of AI-driven vehicles. This is a rare opportunity to work hands-on with robotics, deep learning, and real-world deployment, building pipelines that don’t just run in the cloud but power machines in the field.

What You’ll Do

- Design and maintain end-to-end ML pipelines for model training, validation, and deployment
- Build scalable data systems to handle massive sensor inputs (cameras, LiDAR, IMU) from real-world operations
- Implement model monitoring, A/B testing, and automated feedback loops for continuous performance improvement
- Develop CI/CD workflows for model versioning, testing, and fleet deployment
- Architect distributed computing solutions for high-volume data processing and large-scale model training

Qualifications

- 2+ years of experience in MLOps, DevOps, or ML infrastructure
- Experience with pipeline orchestration tools (e.g., Kubeflow, MLflow)
- Proficiency with Docker, Kubernetes, and cloud platforms (AWS, GCP, or Azure)
- Skilled in Python and Linux system administration
- Familiar with model serving frameworks (TensorRT, ONNX Runtime, TorchServe)
- Experience with monitoring and logging (Prometheus, Grafana, ELK stack)
- Strong communication and organization skills; thrives in a fast-paced, collaborative startup environment
- Security clearable
- Willing to relocate to the Phoenix, AZ area

Why Join Us

- Be part of a team building mission-critical AI systems that make a real-world impact
- Work alongside pioneers in robotics and AI
- Tackle complex, frontier-scale challenges in distributed ML and autonomy
- Shape the future of how intelligent machines collaborate and operate in the field

Responsibilities & Qualifications

- 2+ years of experience in MLOps, DevOps, or ML infrastructure
- Proficiency with Python and Linux system administration
- Experience with Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure)
- Experience with CI/CD pipelines and model versioning

Employment details

Employment type: Full-time
Seniority level: Mid-Senior level
Job function: Engineering and Information Technology
