Cantina
A bit about Cantina:
Cantina, founded by Sean Parker, is a new social platform with the most advanced AI character creator. Build, share, and interact with AI bots and your friends directly in the Cantina or across the internet.
Cantina bots are lifelike, social creatures, capable of interacting wherever humans go on the internet. Recreate yourself using powerful AI, imagine someone new, or choose from thousands of existing characters. Bots are a new media type that offers creators a way to share infinitely scalable, personalized content experiences, combined with seamless group chat across voice, video, and text.
If you're excited about the potential AI has to shape human creativity and social interactions, join us in building the future!
A bit about the role:
We're looking for an Inference Engineer who specializes in productionizing and hosting video AI models at scale. You'll be responsible for taking cutting-edge neural networks from research to production, building robust inference infrastructure, and optimizing model performance for real-time applications. This role focuses on the deployment and serving of large video models.
As an Inference Engineer, you will:
- Deploy video AI models to production - Take research models and build production-ready inference endpoints with APIs, ensuring efficient operation across cloud infrastructure.
- Maintain and optimize inference systems - Debug complex model serving issues, optimize latency, monitor system health, and ensure 99.9% uptime for AI-powered features.
- Implement model optimizations - Work with neural network architectures including diffusion networks, VAEs, and transformers. Apply streaming optimizations and an understanding of video model architectures to implement effective performance improvements.
- Manage inference infrastructure - Leverage containerization with Docker, cloud storage solutions like S3, and cluster computing to build scalable model serving infrastructure.
- Collaborate with research teams - Work closely with AI researchers to understand model requirements, architectural constraints, and optimization opportunities for new video generation models.
A bit about you:
- 2+ years of ML engineering experience with a focus on model inference and deployment
- Strong understanding of neural network architectures, particularly diffusion networks, VAEs, and transformer models
- Experience with video and image models - Understanding of how video/image generation models work, their architectures, and optimization strategies specific to video processing
- Multi-GPU inference expertise - Experience running model components across multiple GPUs and implementing parallel processing strategies for large models
- Production model hosting experience - Track record of deploying and maintaining ML models in production environments, including streaming and real-time inference
- Experience with containerization (Docker), AWS, and cluster computing environments
- Familiarity with machine learning frameworks (PyTorch, TensorFlow)
- Experience with inference platforms and model serving solutions
Technical Stack You'll Work With:
- Cloud: AWS (S3, DynamoDB), Kubernetes clusters
- ML Infrastructure: Model serving platforms, Docker
- Languages: Python
- Frameworks: PyTorch, TensorFlow
- Models: Video generation models, diffusion networks, VAEs, transformers
- Optimization: Multi-GPU inference, real-time processing techniques
Pay Equity:
In compliance with Pay Transparency Laws, the base salary range for this role is $175,000-$225,000 for those located in the San Francisco Bay Area, New York City, and Seattle, WA. When determining compensation, a number of factors will be considered, including skills, experience, job scope, location, and competitive compensation market data.
Benefits:
- Health Care - 99% of premiums for medical, vision, and dental are fully paid by Cantina, plus a One Medical membership.
- Monthly Wellness Stipend - $500/month to use on whatever you'd like!
- Rest and Recharge - 15 PTO days per year, 10 sick days, all Federal holidays, and 2 floating holidays.
- 401(k) - Eligible to participate on day one of employment.
- Parental Leave & Fertility Support
- Competitive Salary & Equity
- Lunch and snacks provided for in-office employees.
- WFH equipment provided for full-time hybrid/remote employees.