Intel
Job Details:
Job Description:
Overview
We are seeking a highly skilled Compiler Engineer with experience in MLIR (Multi-Level Intermediate Representation) and performance-critical code generation. The ideal candidate will focus on designing and implementing compiler infrastructure to generate high-performance kernels for AI and machine learning workloads. This role bridges advanced compiler technology with systems optimization, enabling domain-specific performance across heterogeneous architectures (GPUs and accelerators).
Responsibilities:
Compiler Development and Optimization
Design and implement MLIR-based compiler passes for lowering, optimization, and code generation
Build domain-specific dialects to represent compute kernels at multiple abstraction levels
Develop performance-tuned transformation pipelines targeting vectorization, parallelization, and memory locality
High-Performance Kernel Generation
Generate and optimize kernels for linear algebra, convolution, and other math-intensive primitives
Ensure cross-target portability while achieving near hand-tuned performance
Collaborate with hardware teams to integrate backend-specific optimizations
Performance Engineering
Profile generated code and identify performance bottlenecks across architectures
Implement optimizations for cache utilization, prefetching, and scheduling
Contribute to auto-tuning strategies for workload-specific performance
Collaboration and Research
Work closely with ML researchers, system architects, and runtime engineers to co-design kernel generation strategies
Stay up to date with developments in MLIR, LLVM, and compiler technologies
Publish or contribute to open-source MLIR/LLVM communities where appropriate
Qualifications:
Minimum qualifications are required to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.
Minimum Qualifications:
Bachelor's degree and 7+ years of experience
OR
Master's degree and 4+ years of experience
OR
PhD degree and 2+ years of experience. The degree should be in Computer Science, Computer Engineering, Software Engineering, or a related field
The experience must include:
Compiler design and optimization (MLIR, LLVM, or equivalent)
Code generation and transformation passes
High-performance computing techniques: vectorization, loop optimizations, polyhedral transformations, and memory hierarchy optimization
Familiarity with machine learning workloads (e.g., matrix multiplications, convolutions)
Preferred Qualifications:
Hands-on experience extending MLIR dialects or contributing to the MLIR ecosystem
Background in GPU programming models (CUDA, ROCm, SYCL) or AI accelerators
Knowledge of numerical linear algebra libraries (BLAS, cuDNN, MKL) and their performance characteristics
Experience with auto-tuning frameworks (e.g., TVM, Halide, Triton)
Track record of publications, patents, or contributions to open-source compiler projects
Job Type:
Experienced Hire
Shift:
Shift 1 (United States of America)
Primary Location:
US, Oregon, Hillsboro
Additional Locations:
US, California, San Jose
Business group:
The Software and AI (SAI) Team drives customer value by enabling differentiated experiences through leadership AI technologies and foundational software stacks, products, and services. The group is responsible for developing the holistic strategy for client and data center software in collaboration with OSVs, ISVs, developers, partners and OEMs. The group delivers specialized NPU IP to enable the AI PC and GPU IP to support all of Intel's market segments. The group also has HW and SW engineering experts responsible for delivering IP, SOCs, runtimes, and platforms to support the CPU and GPU/accelerator roadmap, inclusive of integrated and discrete graphics.
Posting Statement:
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust
N/A
Benefits:
We offer a total compensation package that ranks among the best in the industry. It consists of competitive pay, stock, and bonuses, as well as benefit programs that include health, retirement, and vacation. Find more information about all of our Amazing Benefits here:
https://intel.wd1.myworkdayjobs.com/External/page/1025c144664a100150b4b1665c750003
Annual Salary Range for jobs which could be performed in the US:
$179,710.00-$253,700.00
Salary range dependent on a number of factors including location and experience.
Work Model for this Role
This role will require an on-site presence. Job posting details (such as work model, location, or time type) are subject to change.