FAR.AI
Overview
Join to apply for the Research Engineer role at FAR.AI
Base pay range: $80,000.00/yr - $175,000.00/yr

About FAR.AI
FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response. Since our founding in July 2022, we've grown quickly to 20+ staff, produced 30 influential academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and features in the Financial Times, Nature News, and MIT Technology Review. We drive practical change through red-teaming with frontier model developers and government institutes. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers and operating FAR.Labs, an AI safety-focused co-working space in Berkeley. We also support the community through targeted grants to technical researchers.

About FAR.Research

Our research team moves fast. We explore promising research directions in AI safety and scale up only those showing high potential for impact. FAR.AI pursues a diverse portfolio of projects to advance the field.

Role and responsibilities
You will collaborate closely with research advisers and research scientists inside and outside FAR.AI. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR). General expectations include:

- Flexibility: focus on research engineering, but contribute to all aspects of the research project. Help shape the research direction, analyze experimental results, and participate in the write-up of results.
- Variety: work on projects using a range of technical approaches, with opportunities to contribute to different research agendas over time.
- Collaboration: regularly work with collaborators from different academic labs and research institutions.
- Mentorship: develop research taste through regular project meetings and improve programming style through code reviews.
- Autonomy: be highly self-directed, spending time studying machine learning and developing high-level views on AI safety research.

About You
This role is suitable for someone aiming to gain hands-on machine learning engineering experience while exploring AI safety research. Applicants might be looking to grow an existing portfolio of ML research or to transition into AI safety from a software engineering background.

Essential qualifications:

- Significant software engineering experience or experience applying machine learning methods. Evidence may include prior work experience, open-source contributions, or academic publications.
- Experience with at least one object-oriented programming language (preferably Python).
- Results-oriented and motivated by impactful research.

Preferred qualifications:

- Experience with common ML frameworks such as PyTorch or TensorFlow.
- Experience in natural language processing or reinforcement learning.
- Familiarity with operating system internals and distributed systems.
- Publications or open-source software contributions.
- Basic linear algebra, calculus, probability, and statistics.

Projects
As a Research Engineer, you would lead collaborations and contribute to multiple projects, including:

- Scaling laws for prompt injections, exploring how model and data scale affect robustness.
- Robustness of advanced AI systems, including adversarial training and architectural improvements.
- Mechanistic interpretability for mesa-optimization, to audit model goals.
- Red-teaming of frontier models to test vulnerabilities prior to deployment.

Logistics
You will be an employee of FAR.AI, a 501(c)(3) research non-profit.

- Location: remote and/or in-person (Berkeley, CA). We sponsor visas for in-person employees and can hire remotely in most countries.
- Hours: full-time (40 hours/week).
- Compensation: $80,000-$175,000/year, depending on experience and location. We cover work-related travel and equipment expenses and provide catered meals at our Berkeley offices.
- Application process: a 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. Alternatives may be available if a work trial is not possible.

If you have any questions about the role, please contact talent@far.ai.