Voltai
About Voltai
Voltai is developing world models and embodied agents that learn, evaluate, plan, experiment, and interact with the physical world. We are starting with understanding and building hardware: electronics systems and semiconductors, where AI can design and create beyond human cognitive limits.
About The Team
Backed by Silicon Valley's top investors, Stanford University, and the CEOs/Presidents of Google, AMD, Broadcom, Marvell, and others.
We are a team of former Stanford professors, SAIL researchers, Olympiad medalists (IPhO, IOI, etc.), the CTOs of Synopsys & GlobalFoundries, the Head of Sales & CRO of Cadence, a former US Secretary of Defense, a National Security Advisor, and a Senior Foreign-Policy Advisor to four US presidents.
Post-training
In this role, you will post-train frontier models to autonomously perform complex tasks across the semiconductor design and verification pipeline. Models you train will propose and optimize chip architectures, generate and refine RTL code, run simulations, identify verification gaps, and iteratively improve designs, accelerating the pace of semiconductor innovation.
You will collaborate with leading experts in hardware design, verification, and computer architecture to design rich reinforcement learning environments that capture the intricacies of chip design workflows. You'll develop structured reward functions, scaling strategies, and evaluation frameworks that push models toward higher reliability, efficiency, and creativity in semiconductor reasoning.
Your work will directly advance the goal of creating AI systems capable of reasoning about, designing, and verifying next-generation silicon systems.
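To make "structured reward function" concrete, here is a minimal sketch of how a reward might combine functional correctness from simulation with area and timing signals. All names, weights, and budgets here (SimulationResult, area_budget_um2, and so on) are illustrative assumptions for this posting, not Voltai's actual pipeline.

from dataclasses import dataclass

@dataclass
class SimulationResult:
    tests_passed: int   # functional tests passed in simulation
    tests_total: int    # total functional tests run
    area_um2: float     # synthesized cell area in square microns
    slack_ns: float     # worst slack in nanoseconds; >= 0 means timing is met

def structured_reward(result: SimulationResult,
                      area_budget_um2: float = 50_000.0,
                      w_correct: float = 1.0,
                      w_area: float = 0.2,
                      w_timing: float = 0.3) -> float:
    """Fold correctness, area efficiency, and timing closure into one scalar reward."""
    if result.tests_total == 0:
        return 0.0  # no verification signal, no reward
    correctness = result.tests_passed / result.tests_total
    # Reward designs that come in under the (assumed) area budget, clamped to [0, 1].
    area_score = max(0.0, min(1.0, 1.0 - result.area_um2 / area_budget_um2))
    timing_score = 1.0 if result.slack_ns >= 0.0 else 0.0
    # Gate the performance terms on correctness so broken designs are never preferred.
    return w_correct * correctness + correctness * (w_area * area_score + w_timing * timing_score)

# Example: a design that passes 92/100 tests, uses 30,000 um^2, and meets timing.
print(structured_reward(SimulationResult(92, 100, 30_000.0, 0.4)))

The gating on correctness is one of many possible design choices; the point is simply that correctness, performance, and efficiency terms are combined explicitly rather than left implicit.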
You might thrive in this role if you have experience:
Creating and scaling RL environments for LLMs or multimodal agents (a minimal environment sketch follows this list)
Building high-quality evaluation datasets and benchmarks for complex reasoning or design tasks
Working closely with domain experts in hardware and verification to define evaluation metrics, constraints, and simulation conditions
Designing reward functions and feedback pipelines that balance correctness, performance, and design efficiency
Running large-scale RL fine-tuning or post-training experiments for frontier models
Applying reinforcement learning or curriculum learning to structured reasoning or symbolic domains
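As a companion to the list above, here is a minimal sketch of a single-turn RL environment for an RTL-generation task. The interface and the stubbed simulator call are assumptions made for illustration; a real environment would wire step() into an actual simulation and verification flow.

import random
from typing import Dict, List, Optional, Tuple

class RTLDesignEnv:
    """Presents a design spec, accepts generated RTL, and scores it via a stubbed simulator."""

    def __init__(self, specs: List[str]):
        self.specs = specs
        self.current_spec: Optional[str] = None

    def reset(self) -> str:
        """Sample a design spec; the returned text is the model's prompt/observation."""
        self.current_spec = random.choice(self.specs)
        return self.current_spec

    def step(self, rtl_code: str) -> Tuple[float, bool, Dict]:
        """Verify the proposed RTL and return (reward, done, info)."""
        sim = self._simulate(rtl_code)
        reward = sim["pass_rate"]  # a real pipeline would use a structured reward here
        return reward, True, sim   # single-turn: every episode ends after one design

    def _simulate(self, rtl_code: str) -> Dict:
        # Placeholder: a real environment would invoke a simulator and testbench here.
        return {"pass_rate": 0.0, "lint_clean": False}

env = RTLDesignEnv(specs=["Implement a parameterized 8-bit ripple-carry adder."])
prompt = env.reset()
reward, done, info = env.step("module adder(...); endmodule")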