Zendesk
Machine Learning Engineer II – San Francisco, CA
Zendesk, San Francisco, California, United States, 94199
Overview
Machine Learning Engineer II (GenAI & LLM Infrastructure)
Location: San Francisco (Hybrid)

Zendesk’s people have one goal in mind: to make customer experience better. Our products help more than 125,000 global brands (Airbnb, Uber, JetBrains, and Slack, among others) make their billions of customers happy, every day.

The AI/ML Platform team is at the forefront of this mission. We build the foundation that powers every AI-driven experience at Zendesk, enabling product teams to build, evaluate, and deploy state-of-the-art large language model (LLM) applications reliably and at scale.

We’re looking for an ML Engineer II to help design, implement, and improve core components of Zendesk’s GenAI infrastructure, from the LLM Proxy and evaluation frameworks to orchestration tools for AI agents. You’ll work alongside senior engineers and technical leads, contributing to systems that are safe, scalable, and developer-friendly. This is a hands-on engineering role with significant growth potential, ideal for someone with solid ML engineering skills who wants to deepen their expertise in LLM systems and GenAI infrastructure.

What you get to do every day
- Contribute to the development of Zendesk’s LLM Proxy, enabling secure, cost-optimized access to multiple foundation models.
- Help develop and maintain benchmarking and A/B testing frameworks for measuring LLM performance, latency, and cost.
- Assist in building orchestration systems that enable multi-step, tool-using AI agents.
- Work closely with applied ML, product, and platform teams to ensure infrastructure meets product needs.
- Deliver well-tested, maintainable, and performant code ready for production deployment.

What you bring to the role
- 2–4 years of hands-on experience developing and deploying ML systems or infrastructure.
- Familiarity with LLM applications, vector databases, or ML/AI infrastructure components.
- Exposure to AWS, GCP, or Azure; understanding of Kubernetes, Docker, or similar containerized environments.
- Proficiency in Python and familiarity with at least one other backend language (Java, Scala, Golang, or Ruby).
- Understanding of CI/CD workflows, testing frameworks, and software design principles.
- Ability to work effectively in cross-functional teams and take feedback constructively.

Preferred Qualifications
- Experience building small-scale ML services or APIs.
- Exposure to ML observability tools and evaluation frameworks.
- Understanding of prompt engineering or fine-tuning workflows for LLMs.

What our tech stack looks like
Our code is written in Python. Our servers live in AWS.
LLM vendors: OpenAI, Anthropic, Google, Llama
Infra: Kubernetes, Docker, Kafka, AWS

What we offer
- Full ownership of the projects you work on; your work will have a huge impact.
- A team of passionate people who love what they do.
- Exciting projects and the ability to implement your own ideas and improvements.
- Opportunities to learn and grow.

The US annualized base salary range is $110,000.00–$166,000.00. This role may be eligible for bonus, benefits, or related incentives. Offer level depends on job-related capabilities, experience, and other factors such as work location. Compensation details reflect base salary only and do not include bonus or incentives.

Hybrid: Our hybrid experience is designed to provide a rich onsite experience while allowing remote work for part of the week. The specific in-office schedule is determined by the hiring manager.

The intelligent heart of customer experience
Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love.

Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to come together in person at Zendesk offices around the world, while also giving flexibility to work remotely for part of the week.

Zendesk is an equal opportunity employer. We foster global diversity, equity, and inclusion in the workplace. Individuals seeking employment and employees are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please contact us.

Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you require a reasonable accommodation to submit this application or participate in the selection process, please email peopleandplaces@zendesk.com with your specific accommodation request.