Achira
Why Achira
Join a world‑class team of scientists, ML researchers, and engineers working together to make the physical microcosm predictable and reshape the future of drug discovery.
Move beyond the beaten path: we are actively exploring the next frontier of model architectures for AI × chemistry.
Operate at frontier scale: massive compute, massive data, and massive ambition.
Own impactful work end‑to‑end: from ideation to architecture to deployment on large‑scale infrastructure.
Work in an environment that rewards rigor, speed, execution, and an ownership mindset.
About the Role
Achira is building best‑in‑class foundation models to solve the most challenging problems in simulation for drug discovery and beyond. Atomistic foundation simulation models (FSMs) — world models of the physical microcosm — span machine‑learned interatomic potentials (MLIPs), neural network potentials (NNPs), and diverse classes of generative models.
We’re looking for an ML Research Engineer (MLRE) who thrives at the intersection of cutting‑edge machine learning and rigorous research workflows. You’ll work hand‑in‑hand with our research scientists to design and scale intelligent training systems that move us beyond today’s architectures and into the next era of ML‑driven molecular modeling.
Your mandate is simple but ambitious: build the foundations for training atomistic simulation models at scale. This means diving deep into architecture, data, optimizers, losses, training metrics, and representation learning — all while building high‑performance systems that unlock the full potential of our models. In this role, you will help invent the pretraining playbook for FSMs, akin to today's large‑scale generative AI systems, and help transform drug discovery.
At Achira, you’ll have the opportunity to pioneer models that understand and simulate the physical world at atomic resolution, with speed and fidelity never before seen.
What You’ll Do
Scale FSM training: Own the development of next‑generation training pipelines for deep simulation models — bringing new ideas from concept to practice with an obsessive eye on fidelity, efficiency, and scale.
Map strategy: Define and iterate on short-, medium-, and long‑term training strategies, tightly aligned with evolving research goals such as model distillation, uncertainty, and multi‑task learning.
Engineer metrics: Build robust training diagnostics and interpretability tools to measure what matters and steer models toward better representations and outcomes.
Debug at depth: Partner with researchers to diagnose training failures and design resilient, reproducible training workflows that scale across datasets and compute.
Tune architectures: Understand model internals deeply enough to shape and adapt architectures for improved training dynamics, inductive biases, and downstream performance.
Explore representations: Work with researchers to investigate representation learning in our unique domain space, including the potential for tokenization and embedding of novel molecular data.
Automate workflows: Utilize generative coding tools to accelerate your work and, ultimately, automate your workflows.
About You
You’ve been an ML researcher or worked closely with them, and you understand how to turn scientific goals into engineering execution.
You’ve designed training workflows that let researchers move fast, scaling up sweeps, tracking results, and digging into tricky failures.
You care about the art of training: learning rates, batch normalization, weight initialization, optimizer schedules — and how they all interact under the hood.
You’re fluent in PyTorch and comfortable working on distributed cloud setups (multi‑node, multi‑GPU). Bonus if you’ve explored model compilation, acceleration, or distillation.
You’re pragmatic about DevOps: you may not be a full‑time SWE, but you know your way around Kubernetes, SLURM, or similar infrastructure when needed.
You’re energized by uncharted problems and motivated to define new best practices in training world models for the physical microcosm.
You have a sense of relentless urgency and are a natural collaborator who values team success.
You want to work within a well‑funded, bold, talent‑dense organization to do your best work and focus on transformational impact against some of the world’s hardest technical problems.