Achira
Why Achira
Join a world-class team of scientists, ML researchers, and engineers working together to make the physical microcosm predictable and reshape the future of drug discovery.
Move beyond the beaten path: we are actively exploring the next frontier of model architectures for AI x chemistry.
Operate at frontier scale: massive compute, massive data, and massive ambition.
Own impactful work end-to-end: from ideation to architecture to deployment on large-scale infrastructure.
Work in an environment that rewards rigor, speed, execution, and an ownership mindset.
About the Role
Achira is building best-in-class foundation models to solve the most challenging problems in simulation for drug discovery and beyond. Atomistic foundation simulation models (FSMs), which serve as world models of the physical microcosm, span machine-learned interatomic potentials (MLIPs), neural network potentials (NNPs), and diverse classes of generative models.
We're looking for a rare individual who thrives at the intersection of cutting-edge deep learning architectures and high-performance computing. You will help shape the future of molecular machine learning by engineering high-efficiency implementations of advanced architectures for molecular densities, graph neural networks (GNNs), and beyond — pushing past the limits of what's possible on today's hardware.
Foundation simulation models hold immense promise in materials science and drug discovery, but remain underutilized. At Achira, you'll have the opportunity to change that — enabling models that understand and simulate the physical world at atomic resolution, with unprecedented speed and fidelity.
What You’ll Do
Architect & Integrate: Implement state-of-the-art Graph Transformers, GNNs, and similar geometric deep learning architectures in production-ready pipelines.
Optimize Deeply: Drive end-to-end performance — from high-level implementations in PyTorch / JAX down to hand-tuned CUDA kernels — to extract maximum throughput, minimize memory footprint, and eliminate GPU compute bubbles.
Scale Intelligently: Help scale training and inference workloads across thousands (and eventually tens of thousands) of GPUs, maximizing FLOP utilization, saturating caches, and pushing hardware to its limits.
Collaborate Closely: Work alongside scientists and ML researchers to identify, evaluate, and develop novel architectures with superior inductive biases for molecular modeling.
Simulate Precisely: Hone our models to simulate molecular systems with unprecedented speed and accuracy — enabling breakthroughs in drug design, protein modeling, and beyond.
Automate Workflows: Utilize generative coding tools to accelerate your work and, ultimately, automate your optimization workflows.
About You
You're equally excited writing PyTorch / JAX prototype code and tuning custom CUDA kernels for warp-level parallelism.
You have strong opinions about cache hierarchies, tensor fusion, and memory-bound vs. compute-bound workloads — and you know when to profile rather than guess.
You’re energized by new architectures in the GNN and equivariance space, but never let the code get sloppy — you build for reusability, clarity, and scale.
You're curious about bleeding-edge as well as established technologies such as Triton, TensorRT, TorchInductor, and NVIDIA Warp, and you're not afraid to dive in and try them out.
You believe performance is a feature — and love seeing models train and infer faster.
You have a sense of relentless urgency and are a natural collaborator who values team success.
You want to work within a well-funded, bold, talent-dense organization where you can do your best work and focus on transformational impact on some of the world's hardest technical problems.