Meta
Software Engineer, Systems ML - Frameworks / Compilers / Kernels
Meta, Bellevue, Washington, US, 98009
Overview
In this role, you will be a member of the MTIA (Meta Training & Inference Accelerator) Software team and part of the larger PyTorch AI framework organization. The MTIA Software team develops a comprehensive AI compiler strategy to train and serve new DL/ML model architectures with auto-tuned, high performance in production environments across hardware platforms. You will work on core areas such as PyTorch framework components, the AI compiler and runtime, high-performance kernels, and tooling to accelerate ML workloads on current and next-generation MTIA hardware. You will collaborate with AI researchers to analyze models and lower them efficiently onto MTIA hardware, and partner with hardware design teams to develop compiler optimizations for high performance. You will apply software development best practices to feature design, optimization, and performance tuning, and contribute to next-generation hardware-software co-design for AI domain-specific problems.
Responsibilities
Develop the software stack with a core focus on AI frameworks, the compiler stack, high-performance kernel development, and acceleration on next-generation hardware architectures
Contribute to the development of the PyTorch AI framework core compilers to support new state-of-the-art inference and training hardware accelerators and optimize their performance
Analyze deep learning networks and develop and implement compiler optimization algorithms
Collaborate with AI research scientists to accelerate next-generation deep learning models (e.g., recommendation systems, generative AI, computer vision, NLP)
Tune and optimize the performance of deep learning frameworks and software components
Minimum Qualifications
Proven C/C++ programming skills
Experience in AI framework development or accelerating deep learning models on hardware architectures
Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience
Preferred Qualifications
AI compiler: experience with compiler optimizations (loop optimizations, vectorization, parallelization) and hardware-specific optimizations; knowledge of MLIR, LLVM, IREE, XLA, TVM, or Halide is a plus
AI frameworks: experience developing training and inference framework components, system performance optimizations (latency, memory bandwidth, I/O, compute utilization), and tooling
AI high-performance kernels: experience with CUDA, OpenMP/OpenCL, or kernel programming for AI hardware accelerators; experience with acceleration libraries and platforms for AI hardware (e.g., cuBLAS, cuDNN, CUTLASS, HIP, ROCm)
Education/Experience: a Bachelor's degree in CS/CE or a related field with 7+ years of relevant experience, a Master's with 4+ years, or a PhD with 3+ years in AI framework development or accelerating deep learning models on hardware architectures
Experience with frameworks such as PyTorch, Caffe2, TensorFlow, ONNX, TensorRT
Knowledge of GPU/CPU/AI hardware accelerator architectures
Pay and Benefits
The base pay range for this role is $70.67/hour to $208,000/year, plus bonus, equity, and benefits. Individual compensation is determined by skills, qualifications, experience, and location. The range listed reflects base pay only and does not include bonus, equity, or sales incentives. Meta offers benefits and accommodations as applicable.
EEO and Accessibility
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based on race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity or expression, transgender status, age, protected veteran status, disability, or other legally protected characteristics. Meta participates in the E-Verify program where required. Meta may use AI/ML in the employment process. Meta is committed to providing reasonable accommodations for candidates with disabilities. If you need assistance, please contact accommodations-ext@fb.com.