Vinci4d

AI Engineer: Generative Geometry for Hardware Design

Vinci4d, Palo Alto, California, United States, 94306

About Us

We're building a co-pilot for hardware designers. Our mission is to enable millions of hardware designers and engineers to iterate through designs 1000x faster.

We are building geometry- and physics-driven foundation models for each class of part design. Our first model has shipped, and we are expanding our capabilities!

Backed by Khosla Ventures and Eclipse Ventures

About You

You’ve shipped AI products that operate in high-dimensional, multimodal domains such as computer vision, geometry, or simulation-based workflows. You have experience building models that don’t just analyze data, but generate complex, structured outputs under real-world constraints.

You’re comfortable navigating both classic modeling techniques and modern deep learning architectures, and you care about building systems that are principled, testable, and physically meaningful.

What You’ll Work On

Design conditional generative models for 3D geometry tailored to hardware design workflows, including mesh-based, parametric (e.g., CAD), and implicit representations

Develop models that generate geometry conditioned on constraints, partial designs, simulation outcomes, or functional requirements.

Support inverse design tasks where the model proposes viable geometries given desired performance or physical behavior

Implement cutting-edge generative architectures for 3D data (illustrative sketches appear at the end of this section), such as:

Diffusion models for point clouds, voxel grids, or triangle meshes

Neural implicit representations (SDFs, DeepSDF, NeRF variants for shape modeling)

Transformer or autoregressive models for topological and geometric sequence modeling

CAD-aware generation pipelines (sketch-based or parametric component generators)

Develop pipelines for geometry-aware learning and generation (see the second sketch at the end of this section), combining:

Mesh and geometry processing (remeshing, simplification, subdivision)

Differentiable simulation or physics-informed learning components

Conditioning on design constraints, performance targets, or class-specific priors

Collaborate with domain experts in physics, geometry, and simulation to:

Integrate physical principles and simulation feedback into the generation loop

Ensure designs meet functional, physical, and manufacturability requirements

Translate domain knowledge into data priors, architectural biases, or constraints

Design experiments and benchmarks to evaluate generation quality, such as:

Geometry fidelity and resolution

Physical plausibility and constraint satisfaction

Generalization to novel design tasks or unseen part types

Build product-facing generative tools, including:

Auto-complete or correction of partial designs

LLM-to-CAD generation

Proposal of high-quality geometry variants from a design prompt

Design-space exploration tools guided by downstream simulation outcomes

Own projects end-to-end: rapidly prototype models, test ideas, gather feedback, and contribute to production deployment in collaboration with cross-functional teams
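To make the neural implicit representation item above concrete, here is a minimal sketch of a DeepSDF-style signed-distance model. Layer sizes, names, and the dummy data are illustrative placeholders, not our production architecture.

```python
# Illustrative only: a minimal DeepSDF-style implicit shape model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralSDF(nn.Module):
    """MLP mapping a per-shape latent code and a query point to a signed distance."""
    def __init__(self, latent_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance to the surface
        )

    def forward(self, latent: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        # latent: (B, latent_dim) shape code; xyz: (B, 3) query points in object space
        return self.net(torch.cat([latent, xyz], dim=-1))

# One training step against sampled (point, signed-distance) pairs.
model = NeuralSDF()
latent = torch.randn(1024, 256)       # per-shape codes (learned embeddings in practice)
points = torch.rand(1024, 3) * 2 - 1  # query points in [-1, 1]^3
sdf_gt = torch.randn(1024, 1)         # ground-truth signed distances (dummy values here)
loss = F.l1_loss(model(latent, points), sdf_gt)
loss.backward()
```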
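Similarly, a minimal sketch of a constraint-conditioned denoising step for fixed-size point clouds (DDPM-style noise prediction). The shapes, the toy noise schedule, and the conditioning vector are illustrative assumptions, not our stack.

```python
# Illustrative only: one training step of a constraint-conditioned point-cloud denoiser.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDenoiser(nn.Module):
    """Predicts the noise added to a point cloud, given constraints and a timestep."""
    def __init__(self, n_points: int = 512, cond_dim: int = 16, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points * 3 + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_points * 3),
        )

    def forward(self, noisy_pts, cond, t):
        # noisy_pts: (B, N, 3); cond: (B, cond_dim) encoded design constraints or
        # performance targets; t: (B, 1) diffusion time in [0, 1]
        b, n, _ = noisy_pts.shape
        x = torch.cat([noisy_pts.reshape(b, -1), cond, t], dim=-1)
        return self.net(x).reshape(b, n, 3)

denoiser = ConditionalDenoiser()
pts = torch.randn(8, 512, 3)          # clean training point clouds (dummy data)
cond = torch.randn(8, 16)             # e.g. encoded performance targets
t = torch.rand(8, 1)
noise = torch.randn_like(pts)
alpha = (1.0 - t).reshape(8, 1, 1)    # toy linear schedule, for illustration only
noisy = alpha.sqrt() * pts + (1.0 - alpha).sqrt() * noise
loss = F.mse_loss(denoiser(noisy, cond, t), noise)
loss.backward()
```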

Qualifications

4+ years of experience developing and shipping products

Strong background in deep learning, especially applied to 3D or spatial data.

Hands-on experience with mesh generation, implicit surfaces, or neural fields (e.g., NeRF, SDF, DeepSDF, Occupancy Networks).

Experience with the relevant technologies, libraries, and languages: Python, C++, and PyTorch (including PyTorch3D), TensorFlow, or JAX; GPU programming experience is a plus

Experience with diffusion models for 3D generation

Startup experience is a strong advantage.

Understanding of geometry representations (mesh, voxel, point cloud, NURBS, parametric surfaces).

Familiarity with 3D geometry processing, including mesh handling, surface reconstruction, spatial data structures, and basic topology, to support effective 3D model manipulation and analysis
