Signify Technology

AI Performance Software Engineer

Signify Technology, San Francisco, California, United States, 94199


AI Performance Engineer – CUDA & PyTorch Focus
Location: San Francisco, CA
Compensation: $200,000-$300,000


A stealth-mode AI systems company is reimagining how large-scale inference is done. With generative AI workloads scaling rapidly, inference efficiency has become a critical bottleneck. We're building an integrated hardware-software platform that brings breakthrough performance and usability to production-scale LLM applications.

This is an opportunity to work on a highly technical team spun out of top-tier academic research, focused on the cutting edge of AI, distributed systems, and performance optimization.

What You’ll Do:
- Drive core research and implementation of performance optimizations for modern AI models
- Implement advanced techniques such as FlashAttention, KV caching, quantization, and model compression
- Design and build scalable, distributed compute strategies across GPU-based systems
- Profile, benchmark, and optimize CUDA kernels and AI runtime performance across inference stacks
- Work across frameworks like PyTorch, ONNX, and vLLM to improve end-to-end efficiency
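To give a flavor of the quantization work listed above, here is a minimal, illustrative sketch of symmetric per-tensor int8 weight quantization in NumPy. The function names are our own for illustration; they are not part of the company's stack, and production systems would typically use per-channel scales and calibrated activation quantization.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure the error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
max_err = float(np.abs(w - w_hat).max())     # bounded by scale / 2 (rounding)
```

Storing `q` instead of `w` cuts weight memory 4x versus float32, which is one reason quantization matters for inference-bound LLM serving.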

What We're Looking For:
- Strong background in CUDA and low-level GPU performance tuning
- Proven experience building with PyTorch and deploying high-performance ML models
- Proficiency in Python and C++
- Experience with large-scale distributed systems in cloud environments (AWS, GCP, or Azure)
- Exposure to AI compilers or frameworks like MLIR is a plus
- Interest in system design, scalability, and accelerating LLM workloads in real production environments

If you’ve spent your time making large models faster, leaner, and more efficient—and want to solve hard technical problems at the core of GenAI infrastructure—this role is for you. Reach out to learn more.