Role Summary:
Build the product surface of Mem0 Platform, our memory platform powering LLM apps. You'll own features end-to-end across Next.js and Python, shipping fast without compromising code quality, performance, or reliability. You'll partner with design, research, and customers to turn real problems into elegant, scalable product experiences, and you'll take true ownership of outcomes.
What You'll Do:
Ship end-to-end features: Design APIs, build UIs, write backend logic, own data models, and deploy to production.
Build for scale & speed: Optimize latency, caching, and query patterns; keep pages snappy and backends reliable.
Own quality: Write tests, enforce typing/linting, review PRs, and maintain clean, well-documented code.
Collaborate deeply: Work with Design for great UX, with Research to integrate new memory capabilities, and with customers to refine requirements.
Operate what you build: Add observability, set alerts, debug prod issues, and drive continuous improvements.
Lead with product sense: Prioritize ruthlessly, make tradeoffs explicit, and iterate based on data and feedback.
Go beyond your lane: Unblock teams, learn new tools on the fly, and do what it takes to deliver.
Minimum Qualifications:
Proven experience shipping full-stack web applications at scale using Next.js/React and Python.
Strong Python skills and familiarity with modern web stacks (REST/GraphQL, Postgres, Redis, Celery/queues).
Solid front-end chops: component architecture, state management, forms, accessibility, SSR/ISR.
Track record of owning features E2E: from design docs to rollout and post-launch iteration.
Code quality mindset: testing (unit/integration), type safety (TS/pyright/mypy), CI/CD, and thoughtful review culture.
Excellent communication and teamwork; comfortable working cross-functionally with design, research, and GTM.
Comfortable operating production systems (logs, metrics, tracing) and meeting low-latency requirements.
No CS degree required; we care about impact and craftsmanship.
Nice to Have:
Experience integrating LLM/memory features (RAG, embeddings, vector DBs) into products.
Familiarity with vLLM, model serving, or lightweight ML infra integrations.
Real-time features (WebSockets/Server Actions/streaming), file uploads, and background jobs at scale.
Infra awareness: Docker, IaC, basic K8s/Vercel/AWS/GCP deployment patterns and cost thinking.
Product/UI polish: Tailwind/Design Systems, charts, empty states, error UX, and performance budgets.
Security & privacy basics (PII handling, audit/logging).
About Mem0
We're building the memory layer for AI agents. Think long-term memory that enables AI to remember conversations, learn from interactions, and build context over time. We already power millions of AI interactions, and we're backed by top-tier investors and well capitalized.
Our Culture
Office-first collaboration - We're an in-person team in San Francisco. Hallway chats, impromptu whiteboard sessions, and shared meals spark ideas that remote calls can't.
Velocity with craftsmanship - We build for the long term rather than just shipping features. We move fast but never sacrifice reliability or thoughtful design - every system needs to be fast, reliable, and elegant.
Extreme ownership - Everyone at Mem0 is a builder-owner. If you spot a problem or opportunity, you have the agency to fix it. Titles are light; impact is heavy.
High bar, high trust - We hire for talent and potential, then give people room to run. Code is reviewed, ideas are challenged, and wins are celebrated, always with respect and curiosity.
Data-driven, not ego-driven - The best solution wins, whether it comes from a founder or an engineer who joined yesterday. We let results and metrics guide our decisions.