ndimensions labs
Base pay range
$150,000.00/yr - $300,000.00/yr

We’re a team of technologists from MIT, the University of Waterloo, and the University of Washington who know how to turn deep tech into products through multiple successful ventures. We are an equal opportunity employer and welcome applicants from all backgrounds.

The Role
We are looking for a new team member to push forward the AI component of a new robotics software stack. This role emphasizes innovation, experimentation, and deployment of new methods that connect perception and control, spanning vision, language, and action for embodied robotics. The current landscape is evolving quickly, with no clear “best” approach. We want someone eager to invent, question assumptions, and bring new models from research into production on real robotic platforms. You’ll work closely with engineers across AI, robotics, and infrastructure to design and scale architectures that unify vision, language, and action into robust, low-latency control loops.

About the Job
- Full-time position
- Onsite: Boston or Toronto

What You’ll Do
- Invent and prototype new architectures for vision, language, multimodal integration, and control.
- Train and evaluate vision-to-action policies, including VLAs and diffusion-based approaches.
- Adapt and extend existing models (e.g., transformers, vision-language models) for real-world robotic use.
- Develop datasets and evaluation pipelines that support training, benchmarking, and deployment.
- Deploy models on real robots, iterating quickly between simulation, lab experiments, and production tasks.

What We’re Looking For
- Strong background in machine learning for language, vision, and control, with experience training and fine-tuning models.
- Familiarity with multimodal learning and embodied AI (vision-language-action).
- Engineering skills in Python, with experience building and scaling ML systems.
- Hands-on experience with robotics simulation tools (Isaac, MuJoCo, Habitat) and/or real-world robot deployment.
- Curiosity and drive to move ideas from research → experiment → production.

Bonus (Not Required)
- Prior work with VLAs, diffusion policies, or other policy learning frameworks.
- Publications at top research venues.
- Experience in robotics labs, autonomous systems, or applied AI projects.

Apply
Send your CV and links to relevant work (papers, repos, demos) to careers@ndimensions.xyz

Seniority level
Entry level

Employment type
Full-time

Industries
Software Development