Spectraforce Technologies
AI Linguist
12-Month Contract
Mountain View or Sunnyvale (Hybrid)
Notes from the Hiring Manager
The role focuses on agentic AI projects, which involve AI systems capable of performing tasks autonomously for users.
The primary responsibility is contributing to the design, development, and evaluation of agentic AI evaluation pipelines.
Intermediate-level Python skills are required for automating tasks, preprocessing data, and integrating outputs into dashboards.
Description
We are seeking AI Linguists to join our diverse, interdisciplinary team to focus on AI quality, evaluation, and annotation, and to contribute their expertise to the annotation/human judgment pipeline development that lies at the foundation of the client's AI model development and improvement.
Key Responsibilities
Design and execute solutions for AI quality, evaluation, and annotation.
Leverage Generative AI prompting, human input, and hybrid approaches to develop scalable and high‑quality human judgment workflows.
Conduct data‑driven analysis and provide linguistic insights for project development.
Be part of working groups that develop high‑quality human judgment/annotation/evaluation pipelines.
Research new methodologies for process and product improvement.
Maximize productivity, process efficiency, and quality through streamlined workflows, process standardization, and documentation.
Perform continuous evaluations of agentic products and report findings.
Qualifications
B.S./B.A. Degree in Computational Linguistics, Language Technologies, Linguistics, Speech Science, or a related field.
2+ years relevant industry experience.
2+ years of experience with large-scale, end-to-end human judgment flows and tools for the training and evaluation of AI models.
2+ years of experience breaking down linguistic data requirements into clear, concise instructions and designs.
2+ years of experience programming in Python.
Suggested Skills
Willingness to learn technical concepts and new tools and a keen interest in technology.