Together
LLM Training Frameworks and Optimization Engineer
Together, San Francisco, California, United States, 94199
About the Role
At Together.ai, we are building cutting‑edge infrastructure to enable efficient and scalable training of large language models (LLMs). We focus on optimizing training frameworks, algorithms, and infrastructure to push the boundaries of AI performance, scalability, and cost‑efficiency.
We are seeking an LLM Training Frameworks and Optimization Engineer to drive innovations in the development and optimization of distributed training frameworks. In this role, you will ensure that our LLM training pipelines are robust, efficient, and capable of handling the complexities of large‑scale distributed systems.
Responsibilities
Framework Development and Optimization:
Design, implement, and optimize distributed training frameworks tailored for large language models.
Develop custom modules, plugins, and features to enhance framework scalability and performance.
Algorithmic and Systems Optimization:
Optimize communication patterns (e.g., gradient synchronization, all‑reduce) in distributed training.
Implement techniques like mixed precision, tensor parallelism, pipeline parallelism, and sharded training (see the brief sketch after this list).
Performance Tuning:
Conduct in‑depth profiling and debugging of training jobs to identify and resolve bottlenecks.
Collaborate with hardware teams to optimize performance for GPUs, TPUs, and other accelerators.
Scalability and Resilience:
Ensure training systems scale efficiently to thousands of nodes and petabytes of data.
Develop resilience mechanisms for fault‑tolerant and checkpointed training pipelines.
Collaboration and Support:
Work closely with researchers, data engineers, and platform teams to ensure training frameworks meet model and workload requirements.
Provide guidance and tools to improve the overall efficiency of the LLM development lifecycle.
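To make the items above concrete, here is a minimal, hedged PyTorch sketch of data-parallel training with mixed precision. The model, data, and hyperparameters are placeholders, not Together's actual training stack; gradient synchronization happens through DistributedDataParallel's built-in all-reduce during the backward pass.

```python
# Minimal sketch (placeholder model and data, not Together's stack):
# DistributedDataParallel all-reduces gradients across ranks during
# backward(); autocast + GradScaler provide fp16 mixed precision.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                  # one process per GPU
    rank = dist.get_rank()
    device = torch.device("cuda", rank % torch.cuda.device_count())
    torch.cuda.set_device(device)

    model = torch.nn.Linear(4096, 4096).to(device)   # placeholder model
    ddp_model = DDP(model, device_ids=[device.index])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()

    for step in range(10):                           # placeholder training loop
        x = torch.randn(8, 4096, device=device)      # placeholder batch
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = ddp_model(x).square().mean()
        scaler.scale(loss).backward()                # gradients all-reduced here
        scaler.step(optimizer)
        scaler.update()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice a script like this would be launched with torchrun (one process per GPU) and extended with tensor or pipeline parallelism and ZeRO-style sharding via libraries such as DeepSpeed or Megatron-LM.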
Requirements
Must‑Have
Experience:
5+ years of experience in deep learning frameworks, distributed systems, or machine learning infrastructure.
Technical Skills:
Expertise in distributed training frameworks (e.g., PyTorch DDP, DeepSpeed, Megatron‑LM, TensorFlow XLA).
Strong understanding of parallelism techniques (e.g., data, tensor, pipeline, and ZeRO‑based parallelism).
Familiarity with GPU/TPU hardware and deep learning performance optimizations.
Programming:
Proficient in Python and C++ or CUDA for high‑performance computing.
Experience with memory optimization techniques (e.g., activation checkpointing, gradient sharding); see the short sketch after this list.
Knowledge:
Training dynamics for large‑scale LLMs, including hyperparameter tuning and optimization.
Soft Skills:
Analytical problem‑solving skills and a focus on performance improvement.
Strong collaboration and communication skills across teams.
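As a small, hedged illustration of one of the memory optimization techniques listed above, the sketch below uses torch.utils.checkpoint to recompute activations during the backward pass instead of storing them; the module sizes and depth are placeholder values.

```python
# Minimal sketch of activation checkpointing (placeholder sizes):
# checkpoint() discards intermediate activations in the forward pass
# and recomputes them during backward, trading compute for memory.
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedMLP(nn.Module):
    def __init__(self, dim: int = 4096, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            # Recompute this block's activations on the backward pass
            # rather than keeping them resident in GPU memory.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedMLP()
x = torch.randn(2, 4096, requires_grad=True)
model(x).sum().backward()   # activations recomputed block by block
```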
Nice‑to‑Have
Familiarity with graph optimization and compiler‑level performance tuning.
Contributions to open‑source deep learning or distributed training projects.
Experience with low‑level hardware optimizations (e.g., kernel fusion, custom CUDA kernels).
Compensation
Competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full‑time position is $160,000 – $230,000, plus equity and benefits.
Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
About Together AI
Together AI is a research‑driven artificial intelligence company focused on lowering the cost of modern AI systems by co‑designing software, hardware, algorithms, and models. We contribute to leading open‑source research, models, and datasets, and our team has been behind technological advances such as FlashAttention, Hyena, FlexGen, and RedPajama.