Velorum Partners
A low-profile, elite trading firm is hiring an exceptional LLM / ML Researcher to join its AI group, focused on high-performance applications in live markets. This isn’t a product company or a research lab. You’ll ship models that directly power real-world decisions, and do it alongside one of the most advanced engineering teams in the industry.
Further details about the role and what we expect from applicants are below.
What You’ll Do
- Design and train large-scale language models for structured and unstructured data.
- Tackle optimisation problems across training efficiency, inference latency, and memory use.
- Apply RLHF, DPO, and other state-of-the-art methods to extract signal from noise.
- Work cross-functionally with researchers, engineers, and trading teams to deploy and iterate quickly.
What We’re Looking For
- Deep understanding of transformer architectures, attention, and model internals.
- Experience with pre-training, fine-tuning, RLHF, and parallel training pipelines.
- Skilled in Python and PyTorch or TensorFlow.
- Familiar with GPU memory tuning, float16/bfloat16, and distributed compute.
- Bonus: experience with RAG, numerical methods, or ML in high-throughput systems.
This opportunity is confidential, and the impact is massive. If you're curious, message me or apply directly.