Mirage
Member of Technical Staff, GPU Optimization
This role is part of Mirage, the leading AI short-form video company. We are building full-stack foundation models and products that redefine video creation, production, and editing. Over 20 million creators and businesses use Mirage’s products to reach their full creative and commercial potential.
Base pay range: $200,000 - $350,000 per year
Our Products
Captions
Mirage Studio
Our Technology
AI Research @ Mirage
Mirage Model Announcement
Seeing Voices (white paper)
Press Coverage
TechCrunch
Lenny’s Podcast
Forbes AI 50
Fast Company
Our Investors
Index Ventures
Kleiner Perkins
Sequoia Capital
Andreessen Horowitz
Uncommon Projects
Kevin Systrom
Mike Krieger
Lenny Rachitsky
Antoine Martin
Julie Zhuo
Ben Rubin
Jaren Glover
SVAngel
20VC
Ludlow Ventures
Chapter One
Please note that all of our roles require you to work in person at our NYC HQ (located in Union Square).
We do not work with third-party recruiting agencies; please do not contact us.
About The Role
As an expert in making AI models run fast—really fast—you live at the intersection of CUDA, PyTorch, and generative models, and get excited by squeezing every last bit of performance out of modern GPUs. You will turn our cutting-edge video generation research into scalable, production-grade systems. From designing custom CUDA or Triton kernels to profiling distributed inference pipelines, you'll work across the full stack to make sure our models train and serve at peak performance.
Key Responsibilities
Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, for throughput, latency, and memory efficiency on NVIDIA GPUs.
Design, implement, and benchmark custom CUDA and Triton kernels for performance-critical operations (a minimal Triton sketch follows this list).
Integrate low-level optimizations into PyTorch-based codebases, including custom ops, low-precision formats, and TorchInductor passes.
Profile and debug the entire stack—from kernel launches to multi‑GPU I/O paths—using Nsight, nvprof, PyTorch Profiler, and custom tools.
Work closely with colleagues to co‑design model architectures and data pipelines that are hardware‑friendly and maintain state‑of‑the‑art quality.
Stay on the cutting edge of GPU and compiler tech (e.g., Hopper features, CUDA Graphs, Triton, FlashAttention, and more) and evaluate their impact.
Collaborate with infrastructure and backend experts to improve cluster orchestration, scaling strategies, and observability for large experiments.
Provide clear, data-driven insights and trade‑offs between performance, quality, and cost.
Contribute to a culture of fast iteration, thoughtful profiling, and performance-centric design.
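To ground the kernel work described above, here is a minimal sketch of a fused multiply-add kernel in Triton. The operation, function names, and block size are illustrative assumptions for this posting, not part of any actual Mirage codebase.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def fma_kernel(x_ptr, y_ptr, out_ptr, n_elements, scale,
                   BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one contiguous block of elements.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the ragged final block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        # Fusing the multiply and add into one kernel avoids writing
        # the intermediate x * scale back to global memory.
        tl.store(out_ptr + offsets, x * scale + y, mask=mask)

    def fma(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
        out = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, 1024),)
        fma_kernel[grid](x, y, out, n, scale, BLOCK_SIZE=1024)
        return out

Fusion like this trades two kernel launches and an extra round trip through global memory for a single pass—exactly the kind of throughput and memory-efficiency win the responsibilities above target.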
Required Qualifications
Bachelor's degree in Computer Science, Electrical/Computer Engineering, or equivalent practical experience.
3+ years of hands‑on experience writing and optimizing CUDA kernels for production ML workloads.
Deep understanding of GPU architecture: memory hierarchies, warp scheduling, tensor cores, register pressure, and occupancy tuning.
Strong Python skills and familiarity with PyTorch internals, TorchScript, and distributed data‑parallel training.
Proven track record of profiling and accelerating large‑scale training and inference jobs (e.g., mixed precision, kernel fusion, custom collectives); a short profiling sketch follows this list.
Comfort working in Linux environments with modern CI/CD, containerization, and cluster managers such as Kubernetes.
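As a concrete illustration of the profiling experience above, here is a minimal sketch using the PyTorch Profiler with mixed-precision autocast. The toy model and tensor sizes are assumptions made for the example, not anything specific to this role.

    import torch
    from torch.profiler import profile, ProfilerActivity

    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).cuda()
    x = torch.randn(64, 4096, device="cuda")

    # Warm up so one-time costs (lazy init, autotuning) don't skew the trace.
    with torch.no_grad():
        for _ in range(3):
            model(x)
    torch.cuda.synchronize()

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            for _ in range(10):
                model(x)
        torch.cuda.synchronize()

    # Rank ops by GPU time to spot fusion and precision candidates.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))

Sorting by CUDA time surfaces the ops that dominate a step, which is where kernel fusion or a lower-precision format pays off first.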
Preferred Qualifications
Advanced degree (MS/PhD) in Computer Science, Electrical/Computer Engineering, or related field.
Experience with multi‑modal AI systems, particularly video generation or computer vision models.
Familiarity with distributed training frameworks (DeepSpeed, FairScale, Megatron) and model parallelism techniques.
Knowledge of compiler optimization techniques and experience with MLIR, XLA, or similar frameworks.
Experience with cloud infrastructure (AWS, GCP, Azure) and GPU cluster management.
Ability to translate research goals into performant code, balancing numerical fidelity with hardware constraints.
Strong communication skills and experience mentoring junior engineers.
Benefits
Comprehensive medical, dental, and vision plans.
401(k) with employer match.
Commuter Benefits.
Catered lunch multiple days per week.
Dinner stipend every night if you're working late and want a bite!
Grubhub subscription.
Health & Wellness Perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc).
Multiple team offsites per year with team events every month.
Generous PTO policy.
Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Please note that benefits apply to full-time employees only.
Compensation Range: $200K - $350K