DataRobot
Job Description:
DataRobot delivers AI that maximizes impact and minimizes business risk. Our platform and applications integrate into core business processes so teams can develop, deliver, and govern AI at scale. DataRobot empowers practitioners to deliver predictive and generative AI, and enables leaders to secure their AI assets. Organizations worldwide rely on DataRobot for AI that makes sense for their business — today and in the future.

Our AI Compute team is the engine at the heart of DataRobot. We have a mission to build and operate the foundational computing backbone that powers all of DataRobot's AI products and our customers' most demanding workloads. This team works backwards from the needs of data scientists, ML engineers, and application developers to provide the raw power and sophisticated orchestration required to train, deploy, and manage agentic AI at any scale. We are the internal equivalent of a hyperscale cloud provider's core compute service, obsessed with performance, efficiency, and enabling the future of AI.

Senior Backend Engineer – AI Compute
Role responsibilities include developing and supporting new compute primitives, designing and supporting our APIs, and instrumenting DataRobot to integrate with enterprise IT infrastructure to run agentic workloads. Our team leverages modern technologies to achieve our goals and innovate on our solutions. This role includes participation in an on-call rotation—we believe in shared ownership of our platform and aim to build systems that are resilient, observable, and require minimal intervention.

Key Responsibilities
- Develop, test, and support features of DataRobot.
- Create and maintain automated unit tests and functional tests.
- Design infrastructure for new features with input from peers.
- Build a system that ensures microservices are secure, performant, and reliable, and can go from idea to production in hours.
- Build a system that continuously recommends right-sized computing resources for Kubernetes to ensure efficient cloud spending for ourselves and our customers.
- Design and architect automated quality platforms that move Enterprise-Grade releases from once a quarter to once a week, to once a day, to once an hour without sacrificing performance, security, or reliability.
- Work with Product, Legal, and Security to ensure the continuous delivery processes you build are compliant and secure.
- Work with the team to ensure pipelines have clear playbooks and can operate 24/7 without you.
- Work with a diverse group of architects and platform engineers across our R&D department to set continuous delivery and performance requirements for all production services.
- Work with internal product managers to set roadmaps and define milestones that deliver innovative, simple solutions to our teams’ continuous delivery and platform engineering issues.
- Manage individual projects and milestones with abundant communication of progress.

Knowledge, Skills and Abilities
- Expert proficiency in Kubernetes architecture and operations, including resource management/scheduling, auto-scaling, Gateway API/Ingress, Prometheus, and OpenTelemetry, or experience with other orchestrators such as Nomad or Slurm.
- Experience with GPU clusters, either as a user or an administrator, or experience with multi-node AI/ML.
- Passion for developing products for internal developers.
- Strong computer science fundamentals in object-oriented design, data structures, algorithm design, problem-solving, and complexity analysis.
- Understanding of design for scalability, performance, and reliability.
- Deep experience with automated testing and test-driven development.
- Demonstrable knowledge of software architecture for large systems.
- Experience decoupling monolithic software into smaller reusable components.
- Self-motivated and proactive, able to take ownership and deliver results.
- Ability and willingness to learn new technologies.
- Personal drive to get things finished.
- Effective communication.
- Operational excellence: continuously define and improve SLAs (Service Level Agreements) for all software components this team manages, working backward from the customer experience.
Requisite Education and Experience / Minimum Qualifications
- 5+ years of experience
- Expert in developing a wide variety of software with Python (4+ years)
- Experience designing and operating diverse CI/CD pipelines with Harness.io
- Experience designing and innovating large-scale, horizontally and vertically scaled build, testing, and deployment systems for Kubernetes environments, and familiarity with Helm charts
- Preferred: Golang, Terraform and Terragrunt; Chronosphere; multi-cloud experience (AWS, Azure, GCP, and OpenShift)

Nice to Have
- Direct experience with modern distributed compute frameworks (e.g., Ray, Dask) and large-scale job schedulers (e.g., Slurm, Kueue).
- CKAD (Certified Kubernetes Application Developer) certification.
- Publicly reviewable contributions to interesting development projects.
- Agentic AI experience.
- Experience managing NVIDIA infrastructure (NIM Operator, NVIDIA Dynamo Operator).

The talent and dedication of our employees are at the core of DataRobot’s journey to be an iconic company. We strive to attract and retain the best talent by providing competitive pay and benefits, with our employees’ well-being at the core. Here’s what your benefits package may include, depending on your location and local legal requirements: Medical, Dental & Vision Insurance, Flexible Time Off Program, Paid Holidays, Paid Parental Leave, Global Employee Assistance Program (EAP), and more!

DataRobot Operating Principles
- Wow Our Customers
- Set High Standards
- Be Better Than Yesterday
- Be Rigorous
- Assume Positive Intent
- Have the Tough Conversations
- Be Better Together
- Debate, Decide, Commit
- Deliver Results
- Overcommunicate

DataRobot is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender, sexual orientation, gender identity, age, protected veteran status, disability, or other legally protected characteristics. DataRobot is committed to providing reasonable accommodations to applicants with physical and mental disabilities. All applicant data submitted is handled in accordance with our Applicant Privacy Policy.