Rise Technical Recruitment Limited
Founding Senior Backend Engineer
Rise Technical Recruitment Limited, San Francisco, California, United States, 94199
Founding Senior Backend Engineer (AI Tooling)
$175,000 - $200,000 + 0.5% to 3% equity + Benefits + Relocation Assistance + PTO
San Francisco, CA - Hybrid
Are you an engineer who values high autonomy and is looking to join a fast-moving startup at the ground floor? Do you want to help build core infrastructure that powers the next generation of AI applications used by thousands of developers worldwide?

Backed by top VCs, and with a client portfolio that already includes enterprises such as Netflix and Adobe, this profitable, fast-growing AI startup is on track for $10M ARR by 2026 and is now looking to rapidly expand the platform and supercharge its growth even further. The team is building a unified platform that lets developers seamlessly integrate with multiple large language models through a single, high-performance API. With strong enterprise adoption and rapidly growing demand, they are pushing the limits of throughput, latency, and scalability in production systems.

As part of a lean, high-calibre team, you'll work directly with the founding leadership on critical systems. You'll take ownership of features end-to-end, help scale the platform to millions of requests, and build infrastructure that developers rely on every day. You'll improve performance and reliability by upgrading core systems for faster throughput, adding support for the latest LLM features, handling provider-specific quirks, scaling analytics to millions of logs, and building tools to track usage and costs, all while working across a modern stack that includes Python, FastAPI, JavaScript/TypeScript, Redis, Postgres, and cloud storage.

If you're looking to join a stable early-stage startup at its inflection point, with excellent equity, a strong team, and real autonomy, this is a standout opportunity.
The Role:
- Migrate key systems from httpx to aiohttp for 10× higher throughput
- Add support for cutting-edge LLM features such as new "thinking" parameters
- Handle provider-specific quirks such as streaming limitations across APIs
- Scale aggregate spend computation to 1M+ logs
- Implement cost tracking and logging across multiple LLM providers
- Work across a modern stack: Python, FastAPI, JS/TS, Redis, Postgres, S3, GCS, Datadog, and the Slack API

The Person:
- 1-2 years of backend or full-stack engineering experience with production systems
- Proficiency in Python and modern backend frameworks (FastAPI preferred)
- Experience scaling high-performance infrastructure and distributed systems
- Passion for open source and engaging with developer communities
- Strong work ethic and the ability to thrive in small, fast-moving teams
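To give candidates a flavour of the httpx-to-aiohttp work above: both libraries support concurrent request fan-out over a shared connection pool, which is where much of the throughput headroom comes from. A minimal stdlib-only sketch of that fan-out pattern is below; `fetch_completion` is an illustrative stub standing in for an async HTTP call (e.g. an aiohttp `session.post`), not code from this role.

```python
import asyncio

async def fetch_completion(provider: str, payload: dict) -> dict:
    # Stub for an async HTTP call to one LLM provider; the sleep simulates
    # network I/O, during which other requests make progress.
    await asyncio.sleep(0.01)
    return {"provider": provider, "echo": payload}

async def fan_out() -> list:
    # Fan out one request per provider concurrently; asyncio.gather
    # preserves input order in its result list.
    providers = ["provider_a", "provider_b", "provider_c"]
    return await asyncio.gather(
        *(fetch_completion(p, {"n": i}) for i, p in enumerate(providers))
    )

results = asyncio.run(fan_out())
```

With a real client, the same structure applies: open one session, then `gather` the per-provider coroutines so requests overlap instead of running sequentially.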