Hedra, Inc

Research Scientist, Long Video Generation

Hedra, Inc, San Francisco, California, United States, 94199


About Hedra

Hedra is a pioneering generative media company backed by top investors at Index, A16Z, and Abstract Ventures. We're building Hedra Studio, a multimodal creation platform capable of control, emotion, and creative intelligence. At the core of Hedra Studio is our Character-3 foundation model, the first omnimodal model in production. Character-3 jointly reasons across image, text, and audio for more intelligent video generation — it's the next evolution of AI-driven content creation.

At Hedra, we're a team of hard-working, passionate individuals seeking to fundamentally change content creation and build a generational company together. We value startup energy, initiative, and the ability to turn bold ideas into real products. Our team is fully in-person in SF/NY with a shared love for whiteboard problem-solving.

Overview

We are seeking a highly motivated Research Scientist to push the limits of long-form video generation, with a focus on auto-regressive modeling, causal attention mechanisms, and efficient sequence handling. The ideal candidate will have a deep understanding of temporal modeling in generative AI and experience building scalable architectures for multi-minute coherent video outputs.

Responsibilities

Design and implement long video generation architectures, with emphasis on auto-regressive generation, causal attention, and memory-efficient transformer designs (see the illustrative sketch after this list).

Develop methods for maintaining temporal and semantic coherence over long time horizons.

Work closely with engineering to integrate research into production-grade pipelines.

Stay on top of recent advances in long-context transformers, sequence compression, and scalable video generation.

Present results internally and externally, including potential submissions to top-tier conferences.
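For candidates less familiar with the terminology, the sketch below illustrates the causal attention referenced above: each token in a frame sequence attends only to itself and earlier tokens, which is what enables auto-regressive generation. This is a minimal, self-contained PyTorch example for illustration only; it is not Hedra code, and all names and shapes are hypothetical.

import torch
import torch.nn.functional as F

def causal_attention(q, k, v):
    # Single-head scaled dot-product attention with a causal mask:
    # token t attends only to tokens 0..t, never to future tokens.
    T, d = q.shape[-2], q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    future = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(future, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Hypothetical usage: 16 frame tokens with 64-dim embeddings.
x = torch.randn(16, 64)
out = causal_attention(x, x, x)  # (16, 64); row t depends only on rows 0..t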

Qualifications

PhD or strong research/industry experience in Computer Science, Machine Learning, or related fields, with a focus on sequence modeling or generative models.

Deep understanding of transformer architectures, attention mechanisms, and auto-regressive modeling.

Experience with long-context processing and memory-efficient computation.

Proficiency in Python and PyTorch; ability to rapidly prototype and iterate on new architectures.

A record of impactful research or large-scale system deployments.

Benefits

Competitive compensation + equity

401(k) (no match)

Healthcare (Silver PPO Medical, Vision, Dental)

Lunch and snacks at the office

We encourage you to apply even if you don't meet every requirement — we value curiosity, creativity, and the drive to solve hard problems.
