Synagi
At Synagi, we are pushing the frontier of distributed and decentralised AI agents. Our research spans vector-driven retrieval systems, agentic swarms, and resource-efficient multi-agent architectures—all with a sharp focus on real-world performance and human-in-the-loop alignment. We explore scalable, context-aware multi-agent designs that outperform monolithic approaches and keep compute costs in check.
Role
You will own the AI layer that powers Synagi's agents—from vector databases and retrieval-augmented generation (RAG) pipelines to fine-tuning compact transformer models and building classic ML solutions where they make sense. Your work will turn raw data into fast, reliable intelligence that scales with our product ambitions.
Core Responsibilities
Vector databases – Design schemas, sharding strategies, and ANN indexes (Milvus, Vespa, or pgvector) to store and query billions of embeddings.
RAG pipelines – Build and maintain end-to-end retrieval workflows: query rewriting, hybrid BM25 + vector search, and re-ranking for fact-grounded answers (see the retrieval sketch after this list).
Model creation & fine-tuning – Train or adapt lightweight transformer models using techniques such as LoRA; develop classic ML models when they outperform deep nets.
MLOps – Containerise AI workloads with Docker, deploy and scale them on Kubernetes, and automate training/evaluation workflows.
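As a hedged illustration of the hybrid BM25 + vector search and re-ranking step above, the sketch below fuses a lexical ranking and a dense ranking with reciprocal rank fusion. The library choices (rank_bm25, sentence-transformers) and the embedding model name are assumptions made for illustration; the posting does not prescribe them.

```python
# Minimal hybrid retrieval sketch: BM25 + dense cosine similarity, fused with
# reciprocal rank fusion (RRF). Library and model choices here are illustrative
# assumptions, not tools named in this posting.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

corpus = [
    "Milvus shards collections across query nodes.",
    "pgvector adds an ANN-indexable vector type to Postgres.",
    "LoRA fine-tunes low-rank adapter matrices instead of full weights.",
]
query = "How does pgvector index embeddings?"

# Lexical side: BM25 over whitespace-tokenised documents.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
bm25_scores = bm25.get_scores(query.lower().split())

# Dense side: cosine similarity between normalised sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode([query], normalize_embeddings=True)[0]
dense_scores = doc_emb @ query_emb

def rrf(rankings, k=60):
    # Combine rankings without having to calibrate the two score scales.
    fused = np.zeros(len(corpus))
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] += 1.0 / (k + rank + 1)
    return fused

bm25_rank = np.argsort(-bm25_scores)
dense_rank = np.argsort(-dense_scores)
fused_scores = rrf([bm25_rank, dense_rank])

for doc_id in np.argsort(-fused_scores):
    print(f"{fused_scores[doc_id]:.4f}  {corpus[doc_id]}")
```

In production the dense side would query a vector store such as pgvector or Milvus rather than in-memory arrays, with a dedicated re-ranker applied to the fused candidates.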
Must-Have Qualifications
3+ years of production machine learning experience in Python; shipped at least one transformer-based model.
Proficient with PyTorch or JAX for custom model development and fine-tuning (see the LoRA sketch after this list).
Practical experience with vector databases and RAG techniques.
Comfortable with CUDA tooling for debugging and optimising GPU workloads.
Able to design and train ML models from scratch for small-parameter or classical ML tasks.
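To make the fine-tuning expectations concrete, here is a minimal LoRA sketch using PyTorch and Hugging Face PEFT; the base model (distilgpt2), target modules, and hyperparameters are illustrative assumptions rather than Synagi's actual stack.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face PEFT.
# Base model, target modules, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "distilgpt2"  # stand-in for a compact transformer
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Inject low-rank adapters so only a small fraction of parameters is trainable.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["c_attn"])
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# One illustrative optimisation step on a toy batch.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
batch = tokenizer(["Synagi builds multi-agent systems."], return_tensors="pt", padding=True)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The trained adapter weights can then be saved separately and merged into the base model for lightweight serving.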
Nice-to-Haves
Experience with DeepSpeed or vLLM for efficient inference serving.
Familiarity with LangChain or LlamaIndex for rapid agent prototyping.
Interest in decentralised or edge deployments (e.g., WASM at the edge) for ultra-low-latency inference.
Applying
Send your resume—plus a short note (3–5 sentences) describing a production system you scaled or a performance bug you crushed—to garv.s.rawlot@gmail.com. We offer a highly competitive salary, early-stage equity, and an opportunity to be the backbone of synergetic general intelligence. Synagi is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.