Institute of Foundation Models
Senior Infrastructure Engineer - Supercomputing
Institute of Foundation Models, Sunnyvale, California, United States, 94087
About the Institute of Foundation Models
We are a dedicated research lab for building, understanding, using, and risk-managing foundation models. Our mandate is to advance research, nurture the next generation of AI builders, and drive transformative contributions to a knowledge-driven economy.
The Role
We operate some of the world’s largest GPU supercomputing clusters to support cutting-edge AI research and large-scale model deployment. We’re looking for an Infrastructure Engineer to join our core platform team and help build, operate, and scale our hybrid infrastructure across both on-prem and cloud environments. This role is ideal for engineers who thrive at the intersection of distributed systems, cloud automation, and high-performance computing. As part of our team, you’ll work on the core of cutting-edge foundation model training alongside world-class researchers, data scientists, and engineers, tackling the most fundamental and impactful challenges in AI development.
You will participate in the development of groundbreaking AI solutions that have the potential to reshape entire industries. Strategic and innovative problem-solving skills will be instrumental in establishing MBZUAI as a global hub for high-performance computing in deep learning, driving impactful discoveries that inspire the next generation of AI pioneers.
Key Responsibilities
Operate and scale high-performance GPU clusters used for AI training and production inference.
Manage infrastructure across on-premise (Slurm-based) HPC environments and cloud providers such as AWS and Azure.
Implement and maintain Infrastructure as Code using Pulumi, Terraform, or Ansible.
Enhance and secure deployment pipelines using Kubernetes, Flux, and ArgoCD.
Help define and enforce security best practices for internal researchers and production services.
Continuously improve observability, resiliency, and operational tooling across environments.
Tech Stack
Kubernetes, Slurm
Pulumi, Terraform, Ansible
Rust and Go
Flux, ArgoCD
AWS, Azure
Professional Experience
Strong experience managing compute infrastructure in hybrid environments (on-prem and cloud).
Hands-on experience operating Slurm clusters at scale.
Proficiency in deploying and managing containerized applications, ideally written in Rust or Go.
Solid background in IaC and CI/CD best practices.
Experience working with GPU workloads or HPC infrastructure is a strong plus.
Familiarity with securing and monitoring multi-tenant compute environments.
$200,000 - $400,000 a year; salary depends on level.
Visa Sponsorship
This position is eligible for visa sponsorship.
Benefits Include
Comprehensive medical, dental, and vision benefits
Bonus
401K Plan
Generous paid time off, sick leave, and holidays
Paid Parental Leave
Employee Assistance Program
Life insurance and disability