Navier AI
Machine Learning Engineer - Perception
Navier AI, San Francisco, California, United States, 94102
Navier AI is building the first autonomous engineering agents: AI systems that can design, simulate, and optimize complex products to achieve breakthrough levels of performance. Our mission is to enable engineers to move beyond today's slow, manual design cycles by providing agents that reason about physics, explore design trade-offs, and generate high-performance solutions across aerospace, automotive, and advanced manufacturing.

Role Overview
We're looking for a Machine Learning Engineer with a background in perception (especially from robotics, AR/VR, or autonomous vehicles) to help our agents make sense of geometry, scenes, and structure. You'll build the core ML models and representations that let agents interpret CAD files, simulation data, and 3D environments, bridging the gap between raw geometry and engineering decisions.

Responsibilities
- Develop perception models that extract structure, semantics, and key features from 3D geometry (CAD models, meshes, point clouds, simulation outputs)
- Create representations and embeddings that agents can use to reason about geometry, physics, and constraints
- Build multimodal pipelines that integrate visual, geometric, and language-based inputs to provide rich context for LLM-driven agents
- Work with simulation and CAD data to build datasets, labeling pipelines, and training workflows tailored to engineering contexts
- Collaborate closely with agent, frontend, and infra teams to deploy perception models and ensure fast, scalable inference
- Explore and implement techniques such as 3D transformers, geometric deep learning, diffusion models, and structured reasoning
- Help shape the research direction for context-aware AI agents operating in engineering domains

Qualifications
Required:
- Experience in 3D perception or scene understanding, ideally from robotics, autonomous vehicles, AR/VR, or similar domains
- Strong ML engineering skills in Python and PyTorch (or equivalent frameworks)
- Familiarity with 3D data formats (e.g., meshes, point clouds, SDFs, occupancy grids, STL/STEP/CAD)
- Knowledge of neural networks for geometric data (e.g., PointNet++, 3D CNNs, graph neural networks, or NeRF-style models)
- Experience building and deploying ML models end-to-end, from data curation to evaluation and inference
- Comfort working in fast-paced, ambiguous environments and shipping without perfect specs

Preferred:
- Background in mechanical, aerospace, or robotics engineering
- Experience integrating perception models with LLMs, planners, or task agents
- Prior work with engineering simulation data (e.g., CFD, FEA) or CAD tools
- Experience with multi-agent systems, reinforcement learning, or real-time perception
- Interest in pushing the boundaries of how AI understands the physical world

Why This Role Matters
For AI agents to reason about physical systems, they need to see and understand geometry like an engineer. This role is foundational to giving our agents that capability. You'll help define how structure, spatial relationships, and physical constraints are represented and used, enabling AI to go beyond text and truly grasp the design space.

What We Offer
- Competitive compensation, including salary and equity
- Direct exposure to high-impact technical problems in aerospace, automotive, and advanced manufacturing
- Opportunity to help define a new category of engineering software