HUM, Inc.
Overview
Senior Machine Learning Engineer — Location: SF or Waterloo, with ability to travel. Start Date: Flexible, ideally Q3 2025.
About Hum.ai
Hum.ai is building planetary superintelligence. Backed by top funds, we’ve raised $10M+ and are now heads down building. Join us at the cutting edge, where we’re scaling generative transformer diffusion models, designing next-gen benchmarks, and engineering foundation models that go far beyond LLMs. You’ll be at the core of a moonshot journey to define what’s next in agentic AI and frontier model capabilities.
We are looking for an experienced Senior Machine Learning Engineer who is eager to advance the frontier of AI, help us design, build, and scale novel foundation models end to end, and bring hands-on experience implementing a wide range of pre-training and post-training models, including large foundation models (beyond just LLM fine-tuning).
This role is focused on:
- Designing, implementing, and scaling state-of-the-art models
- Productionizing research code, models, and technologically complex systems
- Shaping benchmark design and model evaluation frameworks
- Building agentic AI capabilities and long-term technical bets
Role and responsibilities
The role will involve:
- Collaborating with researchers and scientists to implement, evaluate, and scale proof-of-concept models.
- Owning, implementing, and integrating the latest state-of-the-art methods and external open-source code.
- Developing AI systems capable of accurately understanding the universe and generating new knowledge.
- Training multimodal models that support different sensor modalities alongside other modalities such as text.
What do we do?
We’re building multimodal foundation models for the natural world. We believe there’s more to the world than the internet, and more to intelligence than memorizing the internet. Our models are trained on satellite remote sensing and real-world ground-truth data, and are used by our customers in nature conservation, carbon dioxide removal, and government to protect and positively impact our rapidly changing world. Our ultimate goal is to build AGI of the natural world.
Requirements
- Bachelor’s degree in computer science, engineering, a related field, or equivalent experience.
- 5+ years of relevant work experience.
- Prior experience building distributed training pipelines for multi-node systems using PyTorch and Ray.
- Experience training large diffusion or transformer models, preferably on video or time-series data.
- Proficiency with Python, PyTorch, Ray Train, and the Anyscale platform.
- Familiarity with cloud platforms such as AWS, GCP, or Azure.
Nice to have
- Prior experience training video or time-series models.
- Startup experience; comfortable with a small, dynamic team.
Location
We have a strong preference for in-person work in Waterloo or San Francisco; however, remote work is possible for exceptional candidates.