SumerSports
As an ML/AI Engineer on the LLMOps Platform team, you’ll build the core infrastructure that powers our AI-first product organization.
You’ll design, implement, and scale the systems that make it possible for product pods to develop, evaluate, and safely deploy LLM-based and multimodal applications — from RAG pipelines and model gateways to eval frameworks and cost-optimized serving.
You’ll work closely with AI app engineers, full stack engineers, and the Deep Learning Research group to ensure every AI system we ship is fast, grounded, and reliable.
Responsibilities
Build and operate the LLM Platform:
Develop model routing, prompt registry, and orchestration services for multi-model workflows.
Integrate external LLM APIs (OpenAI, Anthropic, Mistral) and internal fine-tuned models.
Enable fast, safe experimentation:
Implement automated evaluation pipelines (offline + online) with golden sets, rubrics, and regression detection.
Support CI/CD for prompt and model changes, with rollback and approval gates.
Collaborate cross-functionally:
Partner with product pods to instrument RAG pipelines and prompt versioning.
Work with deep learning and data teams to integrate structured and unstructured retrieval into LLM workflows.
Optimize performance and cost:
Profile latency, token usage, and caching strategies.
Build observability and monitoring for LLM calls, embeddings, and agent behaviors.
Ensure reliability and safety:
Implement guardrails (toxicity, PII filters, jailbreak detection).
Maintain policy enforcement and audit logging for AI usage.
Qualifications
5+ years of experience in applied ML, NLP, or ML infrastructure engineering.
Strong coding skills in Python and experience with frameworks such as LangChain, LlamaIndex, or Haystack.
Solid understanding of retrieval-augmented generation (RAG), embeddings, vector databases, and evaluation methodologies.
Experience deploying models or AI systems in production environments (AWS, GCP, or Azure).
Familiarity with prompt management, LLM observability, and CI/CD automation for AI workflows.
Nice to Have
Experience with model serving frameworks (vLLM, Triton, Ray Serve, KServe).
Understanding of LLM evaluation frameworks (OpenAI Evals, Promptfoo, Arize Phoenix, TruLens).
Background in sports analytics, data engineering, or multimodal (video/text) systems.
Exposure to Responsible AI practices (guardrails, safety evals, fairness testing).
Benefits
Competitive salary and bonus plan
Comprehensive health insurance plan
Retirement savings plan (401k) with company match
Remote working environment
A flexible, unlimited time off policy
Generous paid holiday schedule - 13 in total, including the Monday after the Super Bowl