Research engineer: Generative AI model personalization and efficient fine-tuning
Company: Qualcomm Korea YH
Job Area: Engineering Group, Machine Learning Engineering
General Summary: We build AI for edge devices that attends to and assists people, learns from interaction with few labels, and collaborates with other devices in a federated way. We are based in Seoul and work closely with team members in San Diego, Amsterdam, and Beijing.
We are looking for deep‑learning researchers interested in developing new theories and algorithms in the following areas, with a strong focus on generative AI models and their personalization:
Generative foundation models: LLM, LVM, LMM, VLA, etc.
Personalization and on‑device adaptation of generative models
Efficient fine‑tuning techniques such as LoRA, adapters, and other parameter‑efficient tuning methods (a brief sketch follows this list)
Multi‑modal generation (text‑to‑image, text‑to‑video, audio‑visual)
Diffusion / transformer models and their optimization
Knowledge distillation and compression for generative models
Quantization and low‑bit processing for LLM/LMM
Reinforcement Learning for Generative AI (RLHF, policy optimization, reward modeling)
Unsupervised / semi‑supervised / self‑supervised learning
Meta‑Learning / few‑shot learning / domain adaptation / knowledge transfer
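For context on the parameter‑efficient fine‑tuning methods listed above, here is a minimal, hypothetical sketch of a LoRA‑style adapter in PyTorch (one of the platforms named in the qualifications). It is illustrative only and does not represent any Qualcomm implementation; the class name LoRALinear and the rank/scaling values are assumptions made for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical example: wrap a frozen nn.Linear with a trainable
    low-rank update, y = W x + (alpha / r) * B (A x)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A is small random init, B is zero init so the
        # adapter initially leaves the base model's output unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage sketch: wrap one projection layer and train only the LoRA parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
x = torch.randn(2, 768)
print(layer(x).shape)  # torch.Size([2, 768])
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

The zero initialization of B means training starts from the unmodified pretrained model, and only the two small factor matrices are updated, which is what makes this style of adaptation attractive for on‑device personalization.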
The developed technologies will be deployed in mobile device applications and will have a worldwide impact on on‑device learning algorithms running on Qualcomm chips. The research results are also expected to be published in top‑tier conferences and journals.
Minimum Qualifications
Strong machine learning and deep learning knowledge
Recent research experience in Generative AI (LLM, LVM, LMM, VLA) and personalization techniques
Algorithm implementation experience using Python with deep learning platforms (e.g., PyTorch, TensorFlow)
Publication requirement: at least 4 papers as first author, including at least 2 papers in top conferences (NeurIPS, CVPR, ICML, ICCV, ICLR, ECCV, ACL)
Bachelor's degree in Computer Science, Engineering, Information Systems, or related field and 4+ years of relevant work experience OR Master's degree and 3+ years OR PhD and 2+ years of relevant work experience
Preferred Qualifications
Extensive and diverse experience with generative AI models, including LLMs, LVMs, LMMs, and multi‑modal architectures
Proven expertise in efficient adaptation techniques such as LoRA, adapters, and other parameter‑efficient fine‑tuning methods
Hands‑on experience with model optimization for edge deployment, including quantization, pruning, and low‑bit inference/training
Strong track record of publications in top‑tier conferences (NeurIPS, CVPR, ICML, ICCV, ICLR, ECCV, ACL)
Equal Opportunity & Accommodations
Qualcomm is an equal‑opportunity employer. If you are an individual with a disability and need an accommodation during the application or hiring process, Qualcomm is committed to providing an accessible process. Please e‑mail disability-accomodations@qualcomm.com for assistance.
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of company confidential information.
If you would like more information about this role, please contact Qualcomm Careers.