Amazon Web Services (AWS)
Machine Learning - Compiler Engineer II, AWS Neuron, Annapurna Labs
Amazon Web Services (AWS), Cupertino, California, United States, 95014
Do you want to be part of the AI revolution? At AWS, our vision is to make deep learning pervasive for everyday developers and to democratize access to AI hardware and software infrastructure. To deliver on that vision, we've created innovative software and hardware solutions that make it possible. AWS Neuron is the SDK that optimizes the performance of complex ML models executed on AWS Inferentia and Trainium, our custom chips designed to accelerate deep-learning workloads.

This role is for a software engineer on the Compiler team for AWS Neuron. You will be responsible for building the next-generation Neuron compiler, which transforms ML models written in ML frameworks (e.g., PyTorch, TensorFlow, and JAX) for deployment on AWS Inferentia- and Trainium-based servers in the Amazon cloud. You will solve hard compiler optimization problems to achieve optimal performance for a variety of ML model families, including massive-scale large language models such as Llama and DeepSeek, as well as Stable Diffusion, vision transformers, and multi-modal models. Experience in object-oriented languages such as C++ or Java is a must; experience with compilers or with building ML models using ML frameworks on accelerators (e.g., GPUs) is preferred but not required. Experience with technologies such as OpenXLA, StableHLO, and MLIR is an added bonus!

Key job responsibilities
- Design, implement, test, deploy, and maintain innovative software solutions to transform the Neuron compiler's performance, stability, and user interface.
- Work side by side with chip architects, runtime/OS engineers, scientists, and ML Apps teams to seamlessly deploy state-of-the-art ML models from our customers on AWS accelerators with optimal cost/performance benefits.

Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship experience in design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience programming with at least one software programming language

Preferred Qualifications
- Master's degree or PhD in Computer Science or a related technical field
- 3+ years of experience writing production-grade code in object-oriented languages such as C++ or Java
- Experience in compiler design for CPUs, GPUs, vector engines, or ML accelerators

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
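For context on the developer-facing workflow described above, here is a minimal sketch of compiling a PyTorch model with the Neuron SDK's torch-neuronx tracing API. The toy model, input shape, and file name are illustrative assumptions, and the script assumes a Neuron-enabled instance (e.g., Inf2 or Trn1) with torch-neuronx installed.

```python
# Minimal sketch (illustrative, not part of the posting): handing a PyTorch
# model to the Neuron compiler via torch_neuronx.trace(). The toy model and
# file name are placeholders; a real workload would be an LLM, vision
# transformer, or similar.
import torch
import torch_neuronx  # AWS Neuron PyTorch integration (package: torch-neuronx)

# Toy model standing in for a real workload.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# trace() runs the model through the Neuron compiler and returns a module
# whose forward pass executes on NeuronCore accelerators (Inferentia/Trainium).
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled artifact can be saved and reloaded later with torch.jit.load().
neuron_model.save("toy_model_neuron.pt")
print(neuron_model(example_input).shape)
```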