Apple Inc.
Natural Language Modeling Research Scientist, Siri
Apple Inc., Cupertino, California, United States, 95014
Cupertino, California, United States | Machine Learning and AI

Description
We are seeking a candidate with a proven track record in applied machine learning research. Responsibilities include training large-scale language and multimodal models on distributed backends, deploying compact neural architectures such as transformers efficiently on device, and learning policies that can be personalized to users in a privacy-preserving manner. Ensuring quality, with an emphasis on fairness and model robustness, is a key part of the role. You will collaborate cross-functionally with ML researchers, software engineers, and hardware and design teams. Your primary focus will be enhancing conversational understanding through LLM and multimodal models, with an emphasis on improving system safety.

Minimum Qualifications
- 5-7+ years of experience in machine learning, particularly in NLP, NLG, or speech
- Hands-on experience training LLMs, adapting pre-trained LLMs for downstream tasks, and aligning models with human preferences
- Proficiency with ML toolkits such as PyTorch
- Strong programming skills in Python, C, and C++

Preferred Qualifications
- Experience with JAX (preferred but not required)

Note: The base pay range for this role is between $147,400 and $272,100, depending on skills, qualifications, experience, and location. Apple offers comprehensive benefits, including medical and dental coverage, retirement plans, stock programs, educational reimbursement, and more. This role may also be eligible for bonuses, commissions, or relocation assistance.

Apple is an equal opportunity employer committed to diversity and inclusion. We encourage applicants from all backgrounds to apply.