Datawizz
The gap between a company exploring specialized models and continuously delivering them into production is massive. Datawizz’s platform closes that gap. We are hiring a Field AI Engineer to help drive customer adoption. You will be a founding team member in a hybrid role—part AI Engineer, part Solutions Architect. You won't just support Go-To-Market or customer onboarding; you will write the code that makes the Datawizz platform work best for each customer’s unique data stack.
About Datawizz
Datawizz accelerates the transition to specialized models—covering the entire lifecycle from data collection, decomposition, and prompt engineering to SFT (Supervised Fine‑Tuning), RFT (Reinforcement Fine‑Tuning), evaluation, deployment, routing, guardrails, and run‑time observability. Our platform provides the data pipeline that allows AI engineering teams to close the loop faster and continuously improve their specialized models.
About the Job
As a Forward Deployed AI Engineer, you will own the deployment of Datawizz’s platform into customer environments and advise teams on how to optimize their AI Ops workflows. You won't just install software; you will co‑design the architecture for model improvement. You will support customers on everything from specific deployment requirements and augmented data collection to configuring evaluators that capture feedback signals and co‑designing evaluations for their trained, specialized models.
Delivery is urgent. Datawizz supports customers driving immediate business impact—we understand customer problems, structure delivery, and ship fast. You will also identify new use cases and share field signals with our founding and product teams to shape the Datawizz roadmap.
This role is based in San Francisco. We work in the office 5 days per week at 360 Pine St. “Burst” travel for onsite customer integration sprints is to be expected.
Key Responsibilities
Orchestrate high‑stakes deployments:
Lead the technical implementation of the Datawizz platform on our cloud or within customer environments (on‑prem, VPC, or hybrid). Own architecture of the "last mile" to ensure integration with vector stores, inference servers (e.g., vLLM, TGI), and orchestration layers.
Code the "missing link":
Build production‑grade code (Python/Go) for custom data ingestion connectors, complex request routing logic, or programmable guardrails to unblock specific use cases.
Drive model alignment & optimization:
Act as a trusted technical advisor on the model lifecycle. Guide customers from prompt engineering to a rigorous fine‑tuning workflow—helping them create "Golden Datasets", configure feedback loops for SFT/RFT, and implement preference optimization strategies.
Design domain‑specific evaluations:
Co‑design evaluators and "LLM‑as‑a‑Judge" workflows that prove specialized model performance within Datawizz.
Field‑to‑product feedback loop:
Aggregate field signals—identify patterns where customers struggle with convergence or evaluation metrics—and translate them into actionable roadmap items for the core engineering team.
Candidate Profile
Technical DNA
Production engineering background:
3+ years of experience in backend or ML infra. Fluent in Python and comfortable navigating complex, asynchronous codebases.
Deep LLM lifecycle fluency:
Understand the mechanics of model alignment and the data requirements for SFT, RFT, and preference optimization (e.g., DPO/PPO).
Evaluation methodology:
Experience implementing rigorous evaluation frameworks, defining custom rubrics, and analyzing precision/recall in generative contexts.
Infrastructure competence:
Comfortable debugging deployment issues involving Docker, Kubernetes, and cloud networking (AWS/GCP/Azure), including troubleshooting container connectivity with vector databases and inference endpoints.
Professional DNA
The "founding" mindset:
Comfortable with ambiguity and urgency; can start working without a ticket.
Consultative communication:
Able to stand in front of a customer’s Head of AI, understand their state, and quickly outline how Datawizz can help.
High agency:
Willing to do what it takes—writing documentation, fixing bugs, being onsite with a customer on short notice.