Ironclad

AI Data Scientist, Evaluation & Insights

Ironclad, San Francisco, California, United States, 94199


Ironclad is the leading AI-powered contract lifecycle management platform, processing billions of contracts every year. Global innovators trust Ironclad to transform contracting into a strategic advantage: accelerating revenue, reducing risk, and driving efficiency. We’re building the future of intelligent contracting and writing the narrative for how contracts unlock strategic growth. We’re backed by leading investors like Accel, Sequoia, Y Combinator, and BOND. We’d love for you to join us!

About the Role

Ironclad is accelerating its investment in AI to redefine how legal teams manage and understand contracts. As part of this effort, we are hiring an AI Evaluation Engineer to work within our AI Pillar. This role is focused on unlocking insights from our training data, designing feedback loops, and ensuring the continuous improvement of our agentic, ML-, and LLM-based systems through data-driven evaluation and iteration. You’ll partner closely with AI Engineers and Product Managers to improve model quality through systematic analysis, experimentation, and the curation of high-leverage datasets. Your work will directly impact the effectiveness of features like Smart Import, contract understanding, and agentic workflows.

What You'll Be Doing

Analyze training and evaluation datasets to identify distributional gaps, labeling inconsistencies, and long-tail opportunities.

Design and execute labeling campaigns, including development of golden datasets and annotation guidelines.

Build and maintain dashboards that track model accuracy, regression trends, and product-specific KPIs like success rate or answer helpfulness.

Investigate failure modes via prompt clustering, error taxonomy development, and user intent classification.

Operationalize feedback loops: mine product telemetry and human-in-the-loop reviews for signal, then translate into data-driven model improvement strategies.

Partner with engineers and PMs to run structured A/B tests and human evaluations for new models or features.

Support the development of scalable data and evaluation infrastructure for LLMs and agents.

Work with product, engineering, and legal to create clear, transparent processes for handling customer data in AI training, fine-tuning, and evaluation.

About You

Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Data Science, Applied Math).

1–3 years of experience in applied ML or data science, preferably in NLP or LLM-based applications.

Strong SQL and Python skills; experience with Jupyter, Pandas, and experiment tracking tools.

Comfortable navigating ambiguity, slicing large datasets, and communicating insights clearly to cross-functional stakeholders.

Experience with prompt analysis, clustering, or user behavior modeling is a plus.

Bonus: familiarity with LLM evaluation techniques, Reinforcement Learning from Human Feedback (RLHF), or agentic system design; experience with program management is also a plus.

Why This Role Matters

AI is critical to the value Ironclad customers get from their contracts, allowing their businesses to manage risk, close revenue faster, and operate more effectively. None of this is possible without reliable, accurate data. The person in this role will lead these efforts, becoming a key contributor to the development of AI solutions in an industry poised to be transformed by the new generation of models.

What We Value

Bias for action and data curiosity

Ownership mindset and team-first attitude

Comfort in fast-paced, iterative environments

Passion for building AI products that solve real-world customer problems

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
