Bose

Embedded Machine Learning Intern, Model Compression & Deployment

Bose, Framingham, Massachusetts, US 01704


Embedded Machine Learning Intern

The Engineering team at Bose is a thriving, passionate, deeply skilled team of professionals from a broad range of disciplines and experiences who share a common goal: to create products that provide transformative sound experiences. We're looking for an Embedded Machine Learning Intern to join the Corporate Research Team in Model Compression & Deployment.

In this role, you will:

- Design and implement compiler-level optimizations and mapping strategies to efficiently deploy deep learning models on embedded platforms with neural network accelerators.
- Develop and optimize sparse kernel implementations tailored to target hardware, focusing on performance, memory efficiency, and energy savings.
- Build and evaluate machine learning model mappers that translate high-level models into hardware-executable formats.
- Collaborate cross-functionally to integrate ML workloads into embedded systems, ensuring end-to-end functionality.
- Stay up to date with, and experiment with, the latest research in sparse computation, model optimization, and deployment frameworks.

To be successful in this role, you should be/have:

- Currently pursuing an M.S. or Ph.D. in Computer Science, Electrical Engineering, Computer Engineering, or a related field.
- A solid programming background with 3+ years of experience in C/C++ and Python.
- Strong experience with machine learning frameworks (e.g., PyTorch, TensorFlow) and compiler stacks (e.g., TVM, MLIR, XLA).
- Hands-on experience in at least one of the following:
  - Sparse kernel development (CPU, GPU, DSP, or NPU)
  - Model-to-hardware mapping and deployment
  - Embedded system programming and runtime optimization
- Familiarity with techniques such as pruning, quantization, and graph-level model transformation.
- Strong problem-solving and system-level thinking abilities.

Preferred Qualifications:

- Proven experience through internships, research, or projects involving ML compiler optimization or hardware-software co-design.
- Knowledge of ML deployment on resource-constrained devices such as microcontrollers or DSPs.
- Familiarity with digital signal processing and audio-based ML applications.
- Publications or open-source contributions related to model optimization, hardware-aware ML, or embedded AI (e.g., ICLR, ISCA, ICASSP, INTERSPEECH).

Bose is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status, or any other legally protected characteristics.

Bose is committed to providing reasonable accommodations to individuals with disabilities. If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to applicant_disability_accommodationrequest@bose.com. Please include "Application Accommodation Request" in the subject line of the email.