Datawizz

Head of Research

Datawizz, San Francisco, California, United States, 94199


The Company

Datawizz combines distillation, model routing, and pruning to route requests to smaller, more efficient models, helping companies reduce LLM costs by 85% while improving accuracy by over 20%. We started in 2025 with the mission of making AI efficient, affordable, and more accurate than ever before. Datawizz sits between the application and the LLM, automatically logging requests, evaluating them on different models, and training custom SLMs for repeated tasks. Datawizz then automatically routes every request to the best model, significantly reducing costs and improving accuracy.

The Role

As Head of Research, you’ll own Datawizz’s research agenda and build an applied research organization focused on making LLMs radically more efficient and accurate. You’ll define the roadmap across model routing, distillation, SLM training, and evaluation; lead hands-on experimentation; and partner closely with engineering to productionize breakthroughs that drive measurable cost and quality wins for customers. You will:

Set the research strategy and roadmap for mixture-of-models routing, distillation, SLM training, and evaluation.

Build, lead, and mentor a high-caliber applied research team.

Design and run rigorous experiments (ablations, offline/online A/Bs), defining clear metrics for cost, accuracy, and latency.

Own our evaluation stack: datasets, benchmarks, human-in-the-loop reviews, and reliability/safety assessments.

Develop novel methods (e.g., mixture-of-experts/routing, DPO/RLHF, quantization, speculative decoding) and ship reference implementations.

Collaborate with engineering to transfer research into production and measure real-world impact.

This role is in-office, 5 days/week, based in San Francisco.

You might be a great fit if you have experience with:

Leading applied ML/NLP research teams and shipping work into production at a startup or high-growth company.

LLM internals and training techniques: distillation/LoRA, DPO/RLHF, routing/MoE, prompt/adapter tuning, and SLM design.

Building evaluation frameworks (task suites, synthetic data, human eval pipelines) and tying metrics to product outcomes.

Large-scale training and inference systems (PyTorch or JAX; distributed training; inference stacks like vLLM/TensorRT-LLM; quantization/KV-cache optimizations).

Strong Python coding, with a bias toward hands-on experimentation and rapid iteration.

Data curation and labeling workflows, with attention to privacy, safety, and robustness.

Communicating research clearly and partnering cross-functionally with engineering and product.

(Nice to have) Publications or notable open-source contributions; patents; early-stage 0→1 experience.

Benefits

Competitive salary based on experience level (annual compensation range: $50,000-$500,000)

Meaningful equity

Opportunity to be a founding member of a growing company
