Fireworks.ai, Inc.
Member of Technical Staff (Software Engineer), AI Training Infrastructure
San Mateo, California, United States, 94409
About Us
At Fireworks, we're building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We've been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We're an ambitious, collaborative team of builders, founded by veterans of Meta PyTorch and Google Vertex AI.
The Role
As a Training Infrastructure Engineer, you'll design, build, and optimize the infrastructure that powers our large-scale model training operations. You'll collaborate with AI researchers and engineers to create robust training pipelines, optimize distributed training workloads, and ensure reliable, high-performance model development.
Key Responsibilities
Design and implement scalable infrastructure for large‑scale model training workloads
Develop and maintain distributed training pipelines for LLMs and multimodal models
Optimize training performance across multiple GPUs, nodes, and data centers
Implement monitoring, logging, and debugging tools for training operations
Architect and maintain data storage solutions for large‑scale training datasets
Automate infrastructure provisioning, scaling, and orchestration for model training
Collaborate with researchers to implement and optimize training methodologies
Analyze and improve efficiency, scalability, and cost‑effectiveness of training systems
Troubleshoot complex performance issues in distributed training environments
Minimum Qualifications
Bachelor’s degree in Computer Science, Computer Engineering, or related field, or equivalent practical experience
3+ years of experience with distributed systems and ML infrastructure
Experience with PyTorch
Proficiency in cloud platforms (AWS, GCP, Azure)
Experience with containerization and orchestration (Docker, Kubernetes)
Knowledge of distributed training techniques (data parallelism, model parallelism, FSDP)
Preferred Qualifications
Master’s or PhD in Computer Science or related field
Experience training large language models or multimodal AI systems
Experience with ML workflow orchestration tools
Background in optimizing high‑performance distributed computing systems
Familiarity with ML DevOps practices
Contributions to open‑source ML infrastructure or related projects
Compensation
In addition to a competitive base salary and a comprehensive benefits package, total compensation for this role includes meaningful equity in a fast-growing startup. Base salary is determined by a range of factors, including individual qualifications, experience, skills, interview performance, market data, and work location. The listed salary range is intended as a guideline and may be adjusted.
$175,000 - $220,000 USD
Why Fireworks AI
Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low‑latency inference to scalable model serving.
Build What’s Next: Work with bleeding‑edge technology that impacts how businesses and developers harness AI globally.
Ownership & Impact: Join a fast‑growing, passionate team where your work directly shapes the future of AI—no bureaucracy, just results.
Learn from the Best: Collaborate with world‑class engineers and AI researchers who thrive on curiosity and innovation.
Fireworks AI is an equal‑opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.