Velorum Partners
A low-profile, elite trading firm is hiring an exceptional LLM / ML Researcher to join its AI group, focused on high-performance applications in live markets. This isn't a product company or a research lab. You'll ship models that directly power real-world decisions, and do it alongside one of the most advanced engineering teams in the industry.

What You'll Do
- Design and train large-scale language models for structured and unstructured data.
- Tackle optimisation problems across training efficiency, inference latency, and memory use.
- Apply RLHF, DPO, and other state-of-the-art methods to extract signal from noise.
- Work cross-functionally with researchers, engineers, and trading teams to deploy and iterate quickly.

What We're Looking For
- Deep understanding of transformer architectures, attention, and model internals.
- Experience with pre-training, fine-tuning, RLHF, and parallel training pipelines.
- Skilled in Python and PyTorch or TensorFlow.
- Familiar with GPU memory tuning, float16/bfloat16, and distributed compute.
- Bonus: experience with RAG, numerical methods, or ML in high-throughput systems.

This opportunity is confidential, and the impact is massive. If you're curious, message me or apply directly.