Stanford University School of Medicine
Research Scientist - Interpretability (1 Year Fixed Term)
Stanford University School of Medicine, Stanford, California, United States, 94305
The Enigma Project is a research organization based in the Department of Ophthalmology at Stanford University School of Medicine, dedicated to understanding the computational principles of natural intelligence using the tools of artificial intelligence. The project aims to create a foundation model of the brain, capturing the relationship between perception, cognition, behavior, and the activity dynamics of the brain. This role combines rigorous engineering practices with cutting-edge research in model interpretability, working at the intersection of neuroscience and artificial intelligence.

Role & Responsibilities
- Lead research initiatives in the mechanistic interpretability of foundation models of the brain
- Develop novel theoretical frameworks and methods for understanding neural representations
- Design and guide interpretability studies that bridge artificial and biological neural networks
- Apply advanced techniques for circuit discovery, feature visualization, and geometric analysis of high-dimensional neural data
- Collaborate with neuroscientists to connect interpretability findings with biological principles
- Mentor junior researchers and engineers in interpretability methods
- Help shape the research agenda of the interpretability team

What We Offer
- An environment in which to pursue fundamental research questions in AI and neuroscience interpretability
- Access to unique datasets spanning artificial and biological neural networks
- State-of-the-art computing infrastructure
- Competitive salary and benefits package
- A collaborative environment at the intersection of multiple disciplines
- Location at Stanford University, with access to its world-class research community

Desired Qualifications
- Ph.D. in Computer Science, Machine Learning, Computational Neuroscience, or a related field, plus 2+ years of post-Ph.D. research experience
- 2+ years of practical experience training, fine-tuning, and using multi-modal deep learning models
- Strong publication record in top-tier machine learning conferences and journals, particularly in areas related to multi-modal modeling
- Strong programming skills in Python and deep learning frameworks
- Demonstrated ability to lead research projects and mentor others

Preferred Qualifications
- Background in theoretical or computational neuroscience
- Experience processing and analyzing large-scale, high-dimensional data from diverse sources
- Experience with cloud computing platforms and their machine learning services
- Familiarity with big data and MLOps platforms

Stanford University is an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.