
Staff Security Engineer, Container & VM Security

Anthropic, San Francisco, California, United States


About the Role

At Anthropic, we're building frontier AI systems that require unprecedented levels of security and isolation. We're seeking a Staff Security Engineer specializing in container and VM security to help us design and implement robust sandboxing solutions that protect our AI infrastructure from untrusted workloads while maintaining performance and usability.

In this role, you'll be at the forefront of securing our compute infrastructure, working with cutting-edge virtualization and containerization technologies. You'll architect secure-by-default systems that leverage Linux kernel isolation mechanisms, design threat models for complex distributed systems, and build defenses that can withstand sophisticated attacks. Your work will be critical in ensuring that our systems remain secure as we scale to support increasingly powerful models and diverse use cases.

Responsibilities

Design and implement secure sandboxing architectures using virtualization (KVM, Xen, Firecracker, Cloud Hypervisor) and container technologies (OCI containers, gVisor, Kata Containers) to isolate untrusted workloads

Develop deep expertise in Linux kernel isolation mechanisms including namespaces, cgroups, seccomp, capabilities, and LSMs (SELinux/AppArmor) to build defense-in-depth strategies (a minimal namespace sketch follows this list)

Create comprehensive threat models for our sandboxing infrastructure, identifying attack vectors and designing mitigations for container escapes, VM breakouts, and side-channel attacks

Build and maintain security policies and configurations for multi-tenant cloud environments, ensuring strong isolation between different workloads

Partner with infrastructure teams to implement secure-by-default patterns for deploying and managing containerized and virtualized workloads at scale

Develop monitoring and detection capabilities to identify potential security breaches or anomalous behavior within our sandboxed environments

Lead security reviews of new sandboxing technologies and provide guidance on their adoption within our infrastructure

Mentor other engineers on secure coding and sandboxing best practices

Contribute to our security incident response efforts, particularly for isolation-related security events

Collaborate with research teams to understand the unique security requirements of AI workloads and develop appropriate isolation strategies
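
The kernel primitives named in the list above compose directly from user space. As a minimal sketch of that layering (assuming a Linux host and only Go's standard library; this illustrates the mechanism, not Anthropic's production sandbox), the following launches a shell inside fresh PID, mount, UTS, IPC, network, and user namespaces, mapping the unprivileged caller to an in-namespace root:

// sandbox.go: run a shell in its own Linux namespaces (illustrative sketch,
// Linux-only; not a production design).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// Give the child its own PID, mount, UTS, IPC, network, and user
		// namespaces so it cannot see or signal host processes.
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNS |
			syscall.CLONE_NEWUTS | syscall.CLONE_NEWIPC |
			syscall.CLONE_NEWNET | syscall.CLONE_NEWUSER,
		// Map the unprivileged caller to root *inside* the user namespace;
		// outside it, the process gains no privileges.
		UidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getuid(), Size: 1},
		},
		GidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
		},
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "sandboxed shell exited:", err)
		os.Exit(1)
	}
}

A real sandbox layers seccomp filters, cgroup resource limits, and an LSM profile on top of this namespace skeleton, so that escaping any single mechanism still leaves the others intact.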

You may be a good fit if you:

Have 8+ years of experience in systems security, with deep expertise in virtualization and containerization security

Possess expert-level knowledge of Linux kernel isolation mechanisms and have experience implementing them in production environments

Have a proven track record of securing untrusted workloads in cloud settings, including both public cloud and private infrastructure

Are proficient in multiple programming languages (e.g., Go, Rust, C/C++, Python) with experience in systems programming

Have hands-on experience with container runtimes (Docker, containerd, CRI-O) and orchestration platforms (Kubernetes)

Understand hypervisor internals and have experience with VM security (QEMU/KVM, Xen, VMware, Hyper-V)

Can design and articulate complex threat models for distributed systems

Have experience with cloud provider security models and their isolation guarantees

Thrive in ambiguous environments and can balance security requirements with performance and usability needs

Communicate effectively with both technical and non-technical stakeholders about security risks and mitigations

Strong candidates may also have:

Experience with microVM technologies (Firecracker, Cloud Hypervisor) and their security properties

Knowledge of hardware-based security features (Intel TDX, AMD SEV, SGX) and their application to confidential computing

Contributions to open-source security projects related to containerization or virtualization

Experience with eBPF for security monitoring and enforcement

Understanding of AI/ML workload characteristics and their unique security requirements

Track record of identifying and responsibly disclosing security vulnerabilities in virtualization or container platforms

Experience building security tooling and automation for large-scale infrastructure

Background in formal verification or security research

Representative projects:

Designing a multi-layered sandboxing architecture that combines VMs and containers to safely execute untrusted AI-generated code

Implementing runtime security policies using seccomp, AppArmor, and SELinux to minimize container attack surface

Building a threat detection system that identifies potential container escape attempts using eBPF and kernel audit logs

Creating secure defaults and guardrails for Kubernetes deployments to prevent privilege escalation and lateral movement (see the sketch after this list)

Developing automated security testing for our sandboxing infrastructure to continuously validate isolation properties

Architecting network isolation strategies using CNI plugins and cloud-native firewalling to segment workloads
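
To make the Kubernetes guardrail project above concrete, here is a hedged sketch of the kind of validating check involved. The types are simplified stand-ins for the real k8s.io/api/core/v1 structures, and the specific rules are illustrative assumptions rather than Anthropic's actual policy; in production this logic would sit behind a validating admission webhook or a policy engine.

// guardrails.go: toy admission check for risky pod settings (illustrative;
// types are simplified stand-ins for k8s.io/api/core/v1).
package main

import "fmt"

type SecurityContext struct {
	Privileged               bool
	AllowPrivilegeEscalation bool
}

type Container struct {
	Name            string
	SecurityContext SecurityContext
}

type PodSpec struct {
	HostPID     bool
	HostNetwork bool
	Containers  []Container
}

// validate flags settings that commonly enable privilege escalation or
// lateral movement from a compromised workload.
func validate(spec PodSpec) []string {
	var violations []string
	if spec.HostPID {
		violations = append(violations, "hostPID shares the host PID namespace")
	}
	if spec.HostNetwork {
		violations = append(violations, "hostNetwork bypasses network isolation")
	}
	for _, c := range spec.Containers {
		if c.SecurityContext.Privileged {
			violations = append(violations, c.Name+": privileged containers disable most isolation")
		}
		if c.SecurityContext.AllowPrivilegeEscalation {
			violations = append(violations, c.Name+": allowPrivilegeEscalation permits setuid/file-capability gains")
		}
	}
	return violations
}

func main() {
	risky := PodSpec{
		HostNetwork: true,
		Containers: []Container{
			{Name: "worker", SecurityContext: SecurityContext{Privileged: true}},
		},
	}
	for _, v := range validate(risky) {
		fmt.Println("DENY:", v)
	}
}

Rejecting privileged containers and host namespaces at admission time removes the most common first step in escalation chains before any workload runs.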

Deadline to apply: None. Applications will be reviewed on a rolling basis.
