ByteDance

ByteDance, Seattle, Washington, US, 98127

Research Scientist Graduate (Foundation Model, Vision and Language) - 2025 Start (PhD)

Location: Seattle
Team: Technology
Employment Type: Regular

Responsibilities:
Welcome to the Doubao-Vision team, where we spearhead multi-modality foundation models for visual understanding and visual generation. Our mission is to solve the visual intelligence problem for AI. We conduct cutting-edge research in areas such as vision and language, large vision models, and generative foundation models. The team is a mix of experienced research scientists and engineers who aim to advance the research boundaries of foundation models and apply our technologies to our rich application scenarios, creating a feedback loop that helps further improve our foundation technologies. Join us in shaping the future of AI technologies and revolutionizing our product experience for global users.

Key responsibilities include:
- Conduct cutting-edge research and development in computer vision and natural language processing, especially in areas such as multi-modality and vision and language.
- Enhance multimodal understanding and reasoning (images, videos, etc.) throughout the entire development process, encompassing data acquisition, model evaluation, pre-training, SFT, reward modeling, and reinforcement learning, to bolster overall performance.
- Synthesize large-scale, high-quality multi-modal data through methods such as rewriting, augmentation, and generation to improve the abilities of foundation models at various stages (pre-training, SFT, RLHF).
- Investigate and implement robust evaluation methodologies to assess model performance at various stages (ranging from covering diverse multimodal skills to improving user preference alignment), unravel the underlying mechanisms and sources of their abilities, and use this understanding to drive model improvements.

Qualifications:

Minimum Qualifications:
- Research and engineering experience in one or more areas of computer vision and natural language processing, including but not limited to:
  - Multi-modal understanding and vision and language, such as multimodal pre-training, visual instruction tuning, alignment learning, and other related topics.
  - Working with and building very large-scale datasets to scale up foundation models.
  - Language models and applying them to various downstream tasks.
- Highly competent in algorithms and programming; strong coding skills in Python and popular deep learning frameworks.
- Works and collaborates well with team members; able to work independently; strong communication skills.

Preferred Qualifications:
- Publications in top-tier venues such as CVPR, ECCV, ICCV, NeurIPS, ICLR, ICML, EMNLP, ACL, NAACL, etc.
- Impactful open-source projects on GitHub and a demonstrated engineering ability to quickly solve new challenges.

Compensation and Benefits:
The base salary range for this position is $198,360 - $416,100 annually. Benefits may include medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, wellbeing benefits, and more.

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. We are passionate about diversity and inclusion and hope you are too.

Equal Employment Opportunity:
ByteDance is an equal opportunity employer and welcomes applications from qualified candidates.
We are committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws.
