Lila Sciences

Machine Learning Scientist, Open-Endedness (Level Flexible)

Lila Sciences, Work From Home


About Lila Sciences
Lila Sciences is the world’s first scientific superintelligence platform and autonomous lab for life, chemistry, and materials science. We are pioneering a new age of boundless discovery by building the capabilities to apply AI to every aspect of the scientific method. We are introducing scientific superintelligence to solve humankind's greatest challenges, enabling scientists to bring forth solutions in human health, climate, and sustainability at a pace and scale never experienced before. Learn more about this mission at .

At Lila, we are uniquely cross-functional and collaborative. We seek individuals with an inclusive mindset and a diversity of thought. Our teams thrive in unstructured and creative environments. All voices are heard because we know that experience comes in many forms, skills are transferable, and passion goes a long way.

If this sounds like an environment you’d love to work in, even if you only have some of the experience listed below, please apply.

Your Impact at Lila

Lila Sciences is seeking experienced, creative, and talented Machine Learning Scientists (Open-Endedness) across Scientist I/II and Senior Scientist levels. Title will be determined by merit and experience level.

Open-Endedness is an emerging area of machine learning that aims to automate never-ending processes of discovery and exploration. The team, led by Ken Stanley, investigates how to sustain a continual chain of deep, transformative creativity that far exceeds what current models can achieve. The systems we develop will go beyond solving predefined problems to conceiving future, as-yet-unimagined directions for science.

We’re seeking a broad range of ML expertise to facilitate unconventional investigations, including pre-training, fine-tuning, RLHF, distillation, mechanistic interpretability, and quality diversity (QD) techniques.

What You'll Be Building

  • Designing, implementing, and modifying generative models (e.g., LLMs, diffusion models, multimodal models) through unconventional pipelines to achieve novel behaviors.
  • Developing unconventional evaluation techniques, including subjective evaluation and interestingness assessment.
  • Investigating, understanding, and visualizing internal representations of large models, including mechanistic interpretability and beyond.
  • Implementing quality diversity (QD) algorithms such as MAP-Elites, novelty search, POET, OMNI, and minimal criterion novelty search, potentially updating models in the inner loop (a minimal illustrative sketch follows this list).
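To give a flavor of the QD work referenced above, below is a minimal, hedged sketch of a MAP-Elites loop in Python. It is purely illustrative: the genome representation, toy fitness, behavior descriptor, and mutation operator are assumptions, not Lila's actual systems or tasks.

```python
# Minimal MAP-Elites sketch (illustrative only; the task, fitness, and
# descriptor functions here are placeholders, not Lila's actual pipeline).
import random

GRID = 10            # cells per behavior dimension (assumption)
DIM = 8              # genome length (assumption)
ITERS = 10_000       # evaluation budget (assumption)

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(DIM)]

def mutate(genome, sigma=0.1):
    # Gaussian perturbation of every gene
    return [g + random.gauss(0.0, sigma) for g in genome]

def evaluate(genome):
    """Placeholder task: return a fitness and a 2-D behavior descriptor in [0, 1]^2."""
    fitness = -sum(g * g for g in genome)                     # toy objective
    descriptor = (abs(genome[0]) % 1.0, abs(genome[1]) % 1.0) # toy behavior
    return fitness, descriptor

def cell(descriptor):
    # Discretize the descriptor into an archive cell index
    return tuple(min(int(d * GRID), GRID - 1) for d in descriptor)

archive = {}  # cell -> (fitness, genome): one elite per behavior niche

for _ in range(ITERS):
    if archive and random.random() < 0.9:
        # Select a random elite and mutate it
        parent = random.choice(list(archive.values()))[1]
        candidate = mutate(parent)
    else:
        candidate = random_genome()
    fitness, descriptor = evaluate(candidate)
    key = cell(descriptor)
    if key not in archive or fitness > archive[key][0]:
        archive[key] = (fitness, candidate)  # keep the best solution per niche

print(f"filled {len(archive)} / {GRID * GRID} niches")
```

In practice, the role would replace the toy genome and evaluation with large generative models and learned or subjective descriptors; the sketch only conveys the archive-of-elites structure that distinguishes QD methods from single-objective optimization.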

What You’ll Need To Succeed

  • PhD in a quantitative discipline preferred; self-taught researchers with exceptional achievements will also be considered.
  • Publications at conferences such as NeurIPS, ICML, AAAI, ICLR, GECCO, and ICCC.
  • Expertise in ML frameworks (PyTorch, TensorFlow, JAX); experience with QD and neuroevolution algorithms.
  • Experience training and deploying ML models on distributed computing platforms (AWS, GCP, Azure, or on-premises clusters).

We’re All In

Lila Sciences is committed to equal employment opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, disability, gender identity, or veteran status.

A Note to Agencies

Lila Sciences does not accept unsolicited resumes from agencies unless contacted directly by our Talent Acquisition team. Resumes submitted without a signed agreement will become the property of Lila Sciences, and no fees will be owed.
