Prime Intellect

Member of Technical Staff - Full Stack

Prime Intellect, San Francisco, California, United States, 94199


Building the Future of Decentralized AI Development

Prime Intellect is building the open superintelligence stack: from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.

We recently raised $15 million in funding ($20 million raised in total), led by Founders Fund with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka Labs, Tesla, OpenAI), Tri Dao (Chief Scientific Officer, Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI), and many others.

Role Impact

This is a hybrid role spanning both our developer platform and our infrastructure layer. You'll work on two key areas:

Our developer‑facing platform for AI workload management

The underlying distributed infrastructure that powers our training systems

Core Technical Responsibilities

Platform Development

Build intuitive web interfaces for AI workload management and monitoring

Develop REST APIs and backend services in Python

Create real‑time monitoring and debugging tools

Implement user‑facing features for resource management and job control

Infrastructure Development

Design and implement distributed training infrastructure in Rust

Build high‑performance networking and coordination components

Create infrastructure automation pipelines with Ansible

Manage cloud resources and container orchestration

Implement scheduling systems for heterogeneous hardware (CPU, GPU, TPU)

Technical Requirements

Platform Skills

Strong Python backend development (FastAPI, async)

Modern frontend development (TypeScript, React/Next.js, Tailwind)

Experience building developer tools and dashboards

RESTful API design and implementation

Infrastructure Skills

Systems programming experience with Rust

Infrastructure automation (Ansible, Terraform)

Container orchestration (Kubernetes)

Cloud platform expertise (GCP preferred)

Observability tools (Prometheus, Grafana)

Nice to Have

Experience with GPU computing and ML infrastructure

Knowledge of AI/ML model architecture and training

High‑performance networking implementation

Open‑source infrastructure contributions

WebSocket/real‑time systems experience

What We Offer

Competitive compensation with significant equity incentives

Flexible work arrangement (remote or San Francisco office)

Full visa sponsorship and relocation support

Professional development budget for courses and conferences

Regular team off‑sites and conference attendance

Opportunity to shape the future of decentralized AI development

Growth Opportunity

You'll join a team of experienced engineers and researchers working on cutting-edge problems in AI infrastructure. We believe in open development and encourage team members to contribute to the broader AI community through research and open-source contributions.

We value potential over perfection: if you're passionate about democratizing AI development and have experience in either platform or infrastructure development (ideally both), we want to talk to you.

Ready to help shape the future of AI?

Apply now and join us in our mission to make powerful AI models accessible to everyone.
