Hippocratic AI
About Us
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes in the world by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.
Why Join Our Team
Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA's NVentures, Premji Invest, SV Angel, and six health systems.
World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.
For more information, visit www.HippocraticAI.com.
We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA unless explicitly noted otherwise in the job description.
About the Role
We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:
Extensive hands-on experience with state-of-the-art inference optimization techniques
A track record of deploying efficient, scalable LLM systems in production environments
Key Responsibilities
Design and implement multi-node serving architectures for distributed LLM inference
Optimize multi-LoRA serving systems
Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality
Implement speculative decoding and other latency optimization strategies (a simplified draft-and-verify sketch follows this list)
Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases
Continuously benchmark and improve system performance across various deployment scenarios and GPU types
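As context for the speculative decoding item above, the sketch below shows the core draft-and-verify loop in its greedy form. It is an illustration under stated assumptions, not our serving stack: `target_logits_fn` and `draft_logits_fn` are placeholder callables standing in for the large and small models, and a production system would reuse KV caches and batch requests instead of re-running full sequences as done here.

```python
import torch

def speculative_decode(target_logits_fn, draft_logits_fn, prompt_ids,
                       max_new_tokens=64, k=4):
    """Greedy speculative decoding, heavily simplified.

    target_logits_fn / draft_logits_fn: placeholder callables that map a 1-D
    LongTensor of token ids to logits of shape (seq_len, vocab_size).
    Real systems keep KV caches and batch these calls; every step here
    re-runs the full sequence for clarity.
    """
    ids = prompt_ids.clone()
    goal = prompt_ids.numel() + max_new_tokens
    while ids.numel() < goal:
        # 1. The small draft model proposes k tokens greedily.
        draft = ids.clone()
        for _ in range(k):
            nxt = draft_logits_fn(draft)[-1].argmax()
            draft = torch.cat([draft, nxt.view(1)])
        proposed = draft[ids.numel():]

        # 2. The large target model scores prompt + proposal in ONE forward pass.
        logits = target_logits_fn(draft)
        # Target's greedy choice at each position preceding a proposed token.
        target_pick = logits[ids.numel() - 1 : -1].argmax(dim=-1)

        # 3. Accept the longest prefix where draft and target agree.
        n_accept = 0
        while n_accept < k and proposed[n_accept] == target_pick[n_accept]:
            n_accept += 1

        if n_accept == k:
            # All proposals accepted; the target also yields one "bonus" token.
            bonus = logits[-1].argmax().view(1)
            ids = torch.cat([ids, proposed, bonus])
        else:
            # Reject the rest and take the target's own token at the mismatch.
            ids = torch.cat([ids, proposed[:n_accept],
                             target_pick[n_accept].view(1)])
    return ids[:goal]
```

The latency win comes from the target model verifying k proposed tokens in a single forward pass, so each accepted token costs roughly one draft-model step rather than one full target-model step.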
Required Qualifications
2+ years of experience optimizing LLM inference systems at scale
Proven expertise with distributed serving architectures for large language models
Hands-on experience implementing quantization techniques for transformer models (a block-wise quantization sketch follows this list)
Strong understanding of modern inference optimization methods, including:
Speculative decoding techniques with draft models
EAGLE-style speculative decoding approaches
Proficiency in Python and C++
Experience with CUDA programming and GPU optimization (familiarity required, expert-level not necessary)
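The FP4/FP6 work mentioned above depends on hardware-specific formats and fused kernels, so a faithful example is out of scope here. As a rough, assumption-laden illustration of the shared idea, block-wise scaling of weights down to a few bits, here is a minimal NumPy sketch that uses symmetric 4-bit integer levels rather than real FP4 values.

```python
import numpy as np

def quantize_blockwise_int4(weights, block_size=32):
    """Symmetric 4-bit block-wise quantization of a 1-D weight vector.

    Illustrative only: real FP4/FP6 paths store low-bit floating-point
    values plus per-block scales and rely on GPU kernels; plain int4
    levels in [-7, 7] are used here to show the scaling idea.
    """
    pad = (-weights.size) % block_size
    w = np.pad(weights.astype(np.float32), (0, pad))
    blocks = w.reshape(-1, block_size)
    # One scale per block, chosen so the largest magnitude maps to level 7.
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0                      # avoid divide-by-zero
    q = np.clip(np.round(blocks / scales), -7, 7).astype(np.int8)
    return q, scales, weights.size

def dequantize_blockwise_int4(q, scales, n):
    """Reconstruct approximate weights from int4 levels and per-block scales."""
    return (q.astype(np.float32) * scales).reshape(-1)[:n]

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32)
    q, s, n = quantize_blockwise_int4(w)
    w_hat = dequantize_blockwise_int4(q, s, n)
    print("mean abs error:", np.abs(w - w_hat).mean())
```

Real low-bit deployments also handle outlier channels, activation quantization, and fused dequantize-matmul kernels, all of which this sketch deliberately omits.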
Preferred Qualifications
Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM
Experience with custom CUDA kernels
Track record of deploying inference systems in production environments
Deep understanding of systems-level performance optimization
Show us what you've built: Tell us about an LLM inference or training project that makes you proud! Whether you've optimized inference pipelines to achieve breakthrough performance, designed innovative training techniques, or built systems that scale to billions of parameters, we want to hear your story. Open-source contributor? Even better! If you've contributed to projects like vLLM, SGLang, LMDeploy, or similar LLM optimization frameworks, we'd love to see your PRs. Your contributions to these communities demonstrate exactly the kind of collaborative innovation we value.
Join a team where your expertise won't just be appreciated; it will be celebrated and amplified. Help us shape the future of AI deployment at scale!
References
1. Polaris: A Safety-focused LLM Constellation Architecture for Healthcare, https://arxiv.org/abs/2403.13313
2. Polaris 2: https://www.hippocraticai.com/polaris2
3. Personalized Interactions: https://www.hippocraticai.com/personalized-interactions
4. Human Touch in AI: https://www.hippocraticai.com/the-human-touch-in-ai
5. Polaris 1: https://www.hippocraticai.com/research/polaris
7. Research and clinical blogs: https://www.hippocraticai.com/research