Centific
PhD Research Intern
Speech AI | Centific AI Research | Full-time, 40 hours per week
Summary
Centific AI Research seeks a PhD Research Intern to design and evaluate speech-first models, with a focus on Spoken Language Models (SLMs) that reason over audio and interact conversationally. You'll move ideas from prototype to practical demos, working with scientists and engineers to deliver measurable impact.
Scope of Work
- End-to-end speech dialogue systems (speech-in/speech-out) and speech-aware LLMs.
- Alignment between speech encoders and text backbones via lightweight adapters.
- Efficient speech tokenization and temporal compression suitable for long-form audio.
- Reliable evaluation across recognition, understanding, and generation tasks, including robustness and safety.
- Latency-aware inference for streaming and real-time user experiences.
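As a flavor of the temporal-compression work described above, here is a minimal pure-Python sketch of one common strategy, frame stacking, which concatenates consecutive encoder frames to shorten the sequence an LLM must attend over. The 50 Hz frame rate and stride of 4 are illustrative assumptions, not a Centific specification.

```python
from math import ceil

def stack_frames(frames, stride=4):
    """Compress a frame sequence by concatenating every `stride`
    consecutive feature vectors into one longer vector, cutting
    sequence length by ~`stride`x (the last group is zero-padded)."""
    dim = len(frames[0])
    out = []
    for i in range(0, len(frames), stride):
        group = frames[i:i + stride]
        # Pad the final partial group with zero vectors so all
        # stacked tokens share the same dimensionality.
        group = group + [[0.0] * dim] * (stride - len(group))
        out.append([x for vec in group for x in vec])
    return out

# Hypothetical 50 Hz encoder: 10 s of audio -> 500 frames -> 125 tokens.
frames = [[0.0, 1.0] for _ in range(500)]
tokens = stack_frames(frames, stride=4)
```

In practice this trade-off (fewer, wider tokens) is what makes long-form audio tractable for a text backbone; learned downsampling in an adapter plays a similar role.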
Example Projects
- Prototype a conversational SLM using an SSL speech encoder and a compact adapter on an existing LLM; compare against strong baselines.
- Create a data recipe that blends conversational speech with instruction-following corpora; run targeted ablations and report findings.
- Build an evaluation harness that covers ASR/ST/SLU and speech QA, including streaming metrics (latency, stability, endpointing).
- Ship a minimal demo with streaming inference and logging; document setup, metrics, and reliability checks.
- Author a crisp internal write-up: goals, design choices, results, and next steps for productionization.
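To give a concrete sense of the streaming metrics mentioned above, here is a small illustrative sketch computing two common ones, first-token latency and real-time factor (RTF), from wall-clock timestamps. The function and its inputs are hypothetical examples, not Centific's actual harness.

```python
def streaming_metrics(audio_seconds, request_time, token_times):
    """Derive basic streaming-inference metrics from timestamps:
    first-token latency (time to the first emitted token) and
    real-time factor (decode time / audio duration; < 1.0 means
    the system keeps up with real time)."""
    first_token_latency = token_times[0] - request_time
    total_decode_time = token_times[-1] - request_time
    return {
        "first_token_latency_s": first_token_latency,
        "rtf": total_decode_time / audio_seconds,
    }

# Example: a 4 s utterance; first token 0.3 s after the request,
# last token 2.3 s after the request.
m = streaming_metrics(4.0, 10.0, [10.3, 10.9, 11.5, 12.3])
```

A fuller harness would also track inter-token jitter (stability) and endpointing accuracy, but the same timestamp-based pattern applies.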
Minimum Qualifications
- PhD candidate in CS/EE (or related) with research in speech, audio ML, or multimodal LMs.
- Fluency in Python and PyTorch, with hands-on GPU training; familiarity with torchaudio or librosa.
- Working knowledge of modern sequence models (Transformers or SSMs) and training best practices.
- Depth in at least one area: (a) discrete speech tokens/temporal compression, (b) modality alignment to LLMs via adapters, or (c) post-training/instruction tuning for speech tasks.
- Strong experimentation habits: clean code, ablations, reproducibility, and clear reporting.
Preferred Qualifications
- Experience with speech generation (neural codecs/vocoders) or hybrid text+speech decoding.
- Background in multilingual or code-switching speech and domain adaptation.
- Hands-on work evaluating safety, bias, hallucination, or spoofing risks in speech systems.
- Distributed training/serving (FSDP/DeepSpeed), and experience with ESPnet, SpeechBrain, or NVIDIA NeMo.
Tech Stack
- PyTorch, CUDA, torchaudio/librosa; experiment tracking (e.g., Weights & Biases).
- LLM backbones with lightweight adapters; neural audio codecs and vocoders as needed.
- FastAPI/gRPC for services; ONNX/TensorRT and quantization for efficient inference.
What We Offer
- Competitive stipend and hands-on projects with measurable real-world impact.
- Mentorship from applied scientists and engineers; opportunities to publish and present.
- Access to modern GPU infrastructure and a supportive environment for fast, responsible experimentation.
- Flexible location and schedule options, subject to team needs.
Benefits:
- Comprehensive healthcare, dental, and vision coverage
- 401(k) plan
- Paid time off (PTO)
- And more!
Learn more about us at centific.com.
Centific is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, ancestry, citizenship status, age, mental or physical disability, medical condition, sex (including pregnancy), gender identity or expression, sexual orientation, marital status, familial status, veteran status, or any other characteristic protected by applicable law. We consider qualified applicants regardless of criminal histories, consistent with legal requirements.