Naptha AI

Research Scientist (Test Time Compute)

Naptha AI, Austin, Texas, US 78716

AI Research Scientist (Test Time Compute) | naptha.ai

About The Role

We are seeking an exceptional AI Research Scientist to join Naptha AI at the ground floor, focusing on advancing the state of the art in test time compute optimization for large language models. In this role, you will be responsible for researching and developing novel approaches to improve inference efficiency, reduce computational requirements, and enhance model performance at deployment. Working directly with our technical team, you will help shape the fundamental architecture of our inference optimization platform.

This role is critical in solving core technical challenges around model compression, efficient inference strategies, and deployment optimization. You will work at the intersection of machine learning, systems optimization, and hardware acceleration to develop practical solutions for real-world model deployment and scaling.

Core Responsibilities

Research & Development

- Design and implement novel architectures for efficient model inference
- Develop frameworks for model compression and quantization (see the illustrative sketch after this list)
- Research approaches to optimize test-time computation across different hardware
- Create efficient protocols for distributed inference and resource management
- Implement and test new ideas through rapid prototyping
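
Purely as an illustration of the kind of compression work named above (and not part of the formal role description), here is a minimal sketch of post-training dynamic quantization in PyTorch; the toy two-layer model and tensor shapes are assumptions made for the example:

import torch
import torch.nn as nn

# Toy two-layer MLP standing in for part of a transformer; shapes are arbitrary.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)

# Post-training dynamic quantization: Linear weights are stored as int8,
# and activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 1024))
print(out.shape)  # torch.Size([1, 1024])

Measuring how such transformations trade accuracy against latency and memory on target hardware is representative of the questions this role tackles.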

Technical Innovation

- Stay at the forefront of developments in ML efficiency and inference optimization
- Identify and solve key technical challenges in model deployment
- Develop novel approaches to model compression and acceleration
- Bridge theoretical research with practical implementation
- Contribute to the academic community through publications and open source

Platform Development

- Help design and implement efficient inference pipelines
- Develop scalable solutions for model deployment and serving
- Create tools and frameworks for performance monitoring and optimization
- Collaborate with the engineering team on implementation
- Build proofs of concept for new optimization techniques

Leadership & Collaboration

- Work closely with the engineering team to implement research findings
- Mentor team members on advanced optimization techniques
- Contribute to technical strategy and roadmap
- Collaborate with external research partners when appropriate
- Help evaluate and integrate external research developments

You're a good fit for this role if you have:

- Strong background in machine learning and systems optimization
- Deep understanding of model compression and efficient inference techniques
- Hands-on experience with modern ML frameworks and deployment tools
- Experience with ML infrastructure and hardware acceleration
- Track record of implementing efficient ML systems
- Excellent programming skills (Python required, C++/CUDA a plus)
- Strong analytical and problem-solving abilities
- PhD in Machine Learning, Computer Science, Mathematics, or equivalent experience is a plus
- Published research in relevant fields is a plus

Required Technical Experience:

- Python programming and ML frameworks (PyTorch, TensorFlow)
- Experience with model optimization techniques (quantization, pruning, distillation; a brief illustrative sketch follows this list)
- MLOps and efficient model deployment
- Hardware acceleration (GPU, TPU optimization)
- Version control and collaborative development
- Experience with large language models
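
As an illustrative aside (an assumption about what such work can look like, not a description of Naptha AI's actual methods), the sketch below shows a standard knowledge-distillation loss in PyTorch; the temperature T and mixing weight alpha are arbitrary example values:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened teacher
    # and student distributions, scaled by T^2 as in Hinton et al. (2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))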

About the hiring process:

- Initial technical interview
- Research presentation
- System design discussion
- Technical challenge
- Team collaboration interview

Compensation & Benefits:

- Competitive salary with significant equity stake
- Remote-first work environment
- Full medical, dental, and vision coverage
- Flexible PTO policy
- Learning and development budget
- Conference and research publication support
- Home office setup allowance

Additional Notes:

- Must be comfortable with the ambiguity and rapid iteration typical of pre-seed startups
- Strong bias for practical implementation of research ideas
- Passion for advancing the field of efficient ML systems
- Interest in open source contribution and community engagement

Naptha AI is committed to building a diverse and inclusive workplace. We are an equal opportunity employer and welcome applications from all qualified candidates regardless of background.

Seniority level: Entry level
Employment type: Full-time
Job function: Other
Industries: Software Development
