Mundane

VP, Teleportation

Mundane, Palo Alto, California, United States, 94306


Mundane is a venture-backed, seed-stage robot learning startup founded by a team of Stanford researchers and builders. We’re deploying a massive fleet of humanoid robots to perform mundane tasks in commercial environments, collecting data to build the next generation of embodied intelligence. We’re a fast-paced, execution-driven team of engineers, roboticists, and dreamers. Our mission is simple and audacious: build robots that feel human to control, systems that extend human intent into the physical world with immediacy, precision, and grace.

About the Role

As VP of Teleportation, you will lead the design of the sensory and cognitive bridge that connects humans to Mundane’s humanoid robots: a telepresence interface so seamless that distance disappears. You’ll architect the systems that let operators see, feel, and act through robots as if they were there themselves. Your work will unify cutting-edge visualization, human-computer interaction, and robotic control into an integrated experience that defines the future of human-AI symbiosis.

This is not a design role. It’s a frontier research and systems leadership position, one that spans computer vision, HCI, telepresence, XR, and low-latency distributed systems. You’ll collaborate directly with Mundane’s founders, robotics engineers, and AI researchers to invent the interaction layer that makes humanoid intelligence deployable at scale.

Responsibilities

- Architect and lead the development of Mundane’s teleoperation and telepresence stack, spanning perception, visualization, and interaction design.
- Define and prototype new paradigms for human–robot shared embodiment, combining vision, control, and cognitive feedback in real time.
- Lead applied research on operator performance, situational awareness, and embodied cognition in teleoperation.
- Build high-fidelity visualization pipelines integrating multi-sensor inputs: RGB, depth, LiDAR, and proprioceptive state.
- Collaborate with ML and controls teams to develop closed-loop learning systems that fuse human feedback with robot adaptation.
- Explore VR/XR and spatial computing interfaces for immersive control and situational presence.
- Publish and present frontier work at HRI, CVPR, ICRA, SIGGRAPH, CHI, or equivalent venues to help define the state of the art in embodied telepresence.

Qualifications

Must-Haves

- Advanced background in Human–Computer Interaction, Computer Vision, Robotics, or Cognitive Systems.
- Proven experience building or leading teleoperation, XR, or perception-based interaction systems at scale.
- Strong expertise in real-time rendering, image processing, or sensor fusion (Unity, Unreal, CUDA, OpenCV, ROS2).
- Deep understanding of low-latency streaming architectures and human perceptual thresholds (WebRTC, custom protocols, etc.).
- Ability to drive end-to-end system design, from experiment to deployment, across interdisciplinary research teams.

Bonus

- Prior leadership at a frontier lab (Meta Reality Labs, Apple Vision Products, NVIDIA Robotics, Boston Dynamics AI Institute, etc.).
- Experience with VR telepresence, neural rendering, or world models for shared autonomy.
- Published work in HRI, IROS, ICRA, CoRL, SIGGRAPH, or CHI.
- Track record of building teams that bridge research and productization of embodied or immersive systems.

What You’ll Get

- Full ownership of the human–robot teleportation layer: the interface that defines how intelligence moves between human and machine.
- Early equity with meaningful upside in a venture-backed company building the foundation of embodied AI.
- The opportunity to deploy your work across a real humanoid robot fleet, not simulations.
- Influence over the design of the company’s next-generation learning systems, where teleoperation becomes co-learning.
- The chance to define the visual and cognitive language of human–AI symbiosis.
