Aldea
About Aldea
Aldea is a multi-modal foundational AI company changing the scaling laws of AI. We believe that today's models and model architectures present significant bottlenecks to the transformation of software. We are building the models that will power the next era of software.
The Role
We're hiring a Foundational AI Researcher to join our research team. This role is critical to advancing our AI capabilities and sits at the forefront of developing next-generation AI architectures for our voice-based applications.
You will conduct cutting-edge research on novel foundational LLM architectures and multi-modal model architectures. Your work will directly impact our product capabilities and push the boundaries of what's possible in AI-powered expert advisory systems.
Your core responsibility will be to research, design, and prototype advanced AI architectures that enhance our platform's performance and capabilities. You will work on fundamental problems in AI that require deep technical expertise and innovative thinking.
This role requires an extensive background in foundational AI research, with hands-on experience in model training, architecture design, or performance optimization. You must be able to work independently on complex research problems while collaborating with our engineering and product teams.
What you'll do
- Conduct research on novel foundational LLM architectures and multi-modal model architectures
- Design and prototype experimental AI models and training methodologies
- Collaborate with engineering teams to implement research findings in production systems
- Stay current with the latest developments in AI research and identify opportunities for innovation
- Publish research findings and contribute to the broader AI research community
- Work cross-functionally with product and engineering teams to align research with business objectives

Requirements
You must have experience in at least one of the following areas:
- Pre-training or post-training LLMs (ideally more than just SFT fine-tuning)
- Training TTS (Text-to-Speech) or STT (Speech-to-Text) models
- Training multi-modal LLMs
- Optimizing CUDA kernels for AI workloads

Additional Requirements:
- PhD in Computer Science, Machine Learning, or related field, or equivalent industry experience
- Strong programming skills in Python and deep learning frameworks (PyTorch, Transformers, TRL)
- Experience with distributed training and large-scale model development
- Solid understanding of transformer architectures and attention mechanisms
- Track record of research contributions (publications, open-source projects, or industry impact)

Nice to Have
- Experience in multiple areas from the must-have qualifications list
- Experience with voice-based AI applications
- Background in speech processing and audio model architectures
- Experience with model compression and efficiency optimization
- Familiarity with cloud computing platforms and MLOps practices

Benefits
Compensation & Benefits We are a well-funded, Seed-stage company preparing for launch. We offer:
- Competitive base salary
- Performance-based bonus tied to achieving goals
- Equity participation
- Comprehensive benefits, including health, dental, vision, and paid time off
- Flexible work environment