Captions

Member of Technical Staff, GPU Optimization

Captions, New York, New York, US, 10261


Captions is the leading AI video company. Our mission is to empower anyone, anywhere to tell their stories through video. Over 10 million creators and businesses have used Captions to simplify video creation with truly novel and groundbreaking AI capabilities. We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you'll have an opportunity to have an outsized impact on our products and our company's culture.

Our Technology

Mirage Announcement: our proprietary omni-modal foundation model

Seeing Voices (technical paper): generating A-roll video from audio with Mirage

Mirage Studio: for generating expressive videos at scale

"Captions: For Talking Videos": available in the iOS App Store

Press Coverage

Lenny's Podcast: Interview with Gaurav Misra (CEO)

Latest Fundraise: Series C Announcement

The Information: 50 Most Promising Startups

Fast Company: Next Big Things in Tech

Business Insider: 34 most promising AI startups

TIME: The Best Inventions of 2024

Our Investors

We're very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.

**Please note that all of our roles require you to be in person at our NYC HQ (located in Union Square). We do not work with third-party recruiting agencies; please do not contact us.**

About the Role

As an expert in making AI models run fast (really fast), you live at the intersection of CUDA, PyTorch, and generative models, and you get excited by the idea of squeezing every last bit of performance out of modern GPUs. You will have the opportunity to turn our cutting-edge video generation research into scalable, production-grade systems. From designing custom CUDA or Triton kernels to profiling distributed inference pipelines, you'll work across the full stack to make sure our models train and serve at peak performance.

Key Responsibilities

Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, for throughput, latency, and memory efficiency on NVIDIA GPUs

Design, implement, and benchmark custom CUDA and Triton kernels for performance-critical operations (see the kernel sketch after this list)

Integrate low-level optimizations into PyTorch-based codebases, including custom ops, low-precision formats, and TorchInductor passes

Profile and debug the entire stack—from kernel launches to multi-GPU I/O paths—using Nsight, nvprof, PyTorch Profiler, and custom tools

Work closely with colleagues to co-design model architectures and data pipelines that are hardware-friendly and maintain state-of-the-art quality

Stay on the cutting edge of GPU and compiler tech (e.g., Hopper features, CUDA Graphs, Triton, FlashAttention, and more) and evaluate their impact

Collaborate with infrastructure and backend experts to improve cluster orchestration, scaling strategies, and observability for large experiments

Provide clear, data-driven insights and trade-offs between performance, quality, and cost

Contribute to a culture of fast iteration, thoughtful profiling, and performance-centric design
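
To make the custom-kernel work above concrete, here is a minimal sketch of a fused scale-and-add Triton kernel launched from PyTorch. It is purely illustrative and not Captions code; the function names, block size, and test harness are assumptions made for the example.

```python
# Illustrative sketch only (not Captions code): a fused scale-and-add kernel in
# Triton, representative of the performance-critical custom ops this role covers.
import torch
import triton
import triton.language as tl


@triton.jit
def fused_scale_add_kernel(x_ptr, y_ptr, out_ptr, scale, n_elements,
                           BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Fusing the multiply and add avoids materializing an intermediate tensor,
    # saving one full read/write pass over global memory.
    tl.store(out_ptr + offsets, x * scale + y, mask=mask)


def fused_scale_add(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = x.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    fused_scale_add_kernel[grid](x, y, out, scale, n_elements, BLOCK_SIZE=1024)
    return out


if __name__ == "__main__":
    a = torch.randn(1 << 20, device="cuda")
    b = torch.randn(1 << 20, device="cuda")
    torch.testing.assert_close(fused_scale_add(a, b, 2.0), a * 2.0 + b)
```

Fusing the two elementwise operations into one kernel removes an intermediate tensor and an extra pass over global memory, which is the kind of memory-bandwidth win this role targets at much larger scale.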

Required Qualifications

Bachelor's degree in Computer Science, Electrical/Computer Engineering, or equivalent practical experience

3+ years of hands-on experience writing and optimizing CUDA kernels for production ML workloads

Deep understanding of GPU architecture: memory hierarchies, warp scheduling, tensor cores, register pressure, and occupancy tuning

Strong Python skills and familiarity with PyTorch internals, TorchScript, and distributed data-parallel training

Proven track record profiling and accelerating large-scale training and inference jobs (e.g., mixed precision, kernel fusion, custom collectives; see the profiling sketch after this list)

Comfort working in Linux environments with modern CI/CD, containerization, and cluster managers such as Kubernetes
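
As a concrete reference for the profiling experience described above, here is a minimal sketch using the standard torch.profiler API. `model`, `loader`, and `optimizer` are placeholders, and the schedule and trace directory are arbitrary assumptions rather than an actual Captions workflow.

```python
# Illustrative sketch only: profile a few training steps and dump a kernel summary.
import torch
from torch.profiler import ProfilerActivity, profile, schedule


def profile_training_steps(model, loader, optimizer, trace_dir="./traces"):
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        schedule=schedule(wait=1, warmup=1, active=3),
        on_trace_ready=torch.profiler.tensorboard_trace_handler(trace_dir),
        record_shapes=True,
        profile_memory=True,
    ) as prof:
        for step, batch in enumerate(loader):
            loss = model(batch).mean()
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            prof.step()  # advance the wait/warmup/active schedule
            if step >= 5:
                break
    # Summarize the hottest kernels by total GPU time.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```

The exported traces and the key_averages table are typical starting points for spotting kernel-launch overhead, idle gaps between GPU work, and memory-bound operations before reaching for Nsight or custom tooling.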

Preferred Qualifications

Advanced degree (MS/PhD) in Computer Science, Electrical/Computer Engineering, or related field

Experience with multi-modal AI systems, particularly video generation or computer vision models

Familiarity with distributed training frameworks (DeepSpeed, FairScale, Megatron) and model parallelism techniques

Knowledge of compiler optimization techniques and experience with MLIR, XLA, or similar frameworks

Experience with cloud infrastructure (AWS, GCP, Azure) and GPU cluster management

Ability to translate research goals into performant code, balancing numerical fidelity with hardware constraints

Strong communication skills and experience mentoring junior engineers

Benefits:

Comprehensive medical, dental, and vision plans

401(k) with employer match

Commuter Benefits

Catered lunch multiple days per week

Dinner stipend every night if you're working late and want a bite!

DoorDash DashPass subscription

Health & Wellness Perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc)

Multiple team offsites per year with team events every month

Generous PTO policy

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Please note benefits apply to full-time employees only.
