ByteDance

Student Researcher (Doubao (Seed) - LLM Foundation Research) - 2025 Summer/Fall/Winter (PhD)

ByteDance, San Jose, California, United States, 95199


Base pay range: $60.00/hr - $75.00/hr

Responsibilities

Team Introduction

This position is responsible for researching and building the company's LLMs. The role involves exploring new applications and solutions for related technologies in areas such as search, recommendation, advertising, content creation, and customer service. The goal is to meet users' growing demand for intelligent interactions and to significantly enhance how they live and communicate in the future.

The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work. The Student Researcher program offers a flexible format that can accommodate both Onsite and Remote arrangements, as well as Part-Time or Full-Time commitments, depending on the needs of the project and the researcher.

We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities.

Candidates can apply to a maximum of two positions and will be considered in the order in which they apply. The application limit applies to TikTok and its affiliates' jobs globally. Applications will be reviewed on a rolling basis - we encourage you to apply early.

Responsibilities:

LLM Post Training

- Reinforcement Learning from Human Feedback: Design advanced reinforcement learning algorithms for large language models by integrating technologies such as heuristic-guided search, multi-agent reinforcement learning, and other related techniques.
- Reward modeling: Formulate novel reward modeling methodologies aimed at significantly enhancing robustness, improving generalization capabilities, and increasing overall accuracy.
- Scalable oversight: Develop scalable oversight mechanisms that enable efficient monitoring and control of LLMs as they grow in size and complexity, ensuring consistent alignment with predefined objectives.
- Interpretability in LLMs: Enhance the interpretability of language models, ensuring that their decision-making processes and outputs are transparent and understandable to users and stakeholders.

LLM Horizon

- Reasoning and planning for foundation models: Enhance reasoning and planning throughout the entire development process, encompassing data acquisition, model evaluation, pretraining, SFT, reward modeling, and reinforcement learning, to bolster overall performance.
- Synthesize large-scale, high-quality (multi-modal) data through methods such as rewriting, augmentation, and generation to improve the abilities of foundation models at various stages (pretraining, SFT, RLHF).
- Solve complex tasks via System 2 thinking, leveraging advanced decoding strategies such as MCTS and A*.
- Investigate and implement robust evaluation methodologies to assess model performance at various stages, unravel the underlying mechanisms and sources of model abilities, and use this understanding to drive model improvements.
- Teach foundation models to use tools and interact with APIs and code interpreters; build agents and multi-agent systems to solve complex tasks.

Qualifications

Minimum Qualifications

Currently enrolled in a PhD degree in Computer Science, Linguistics, Statistics, or related technical field.

Excellent knowledge of theory and practice of Large Language Models, Reinforcement Learning, Natural Language Processing, Machine Learning.

Strong publication record at leading conferences (NeurIPS, ICML, ACL, EMNLP, etc.).

Excellent coding ability: familiarity with data structures and fundamental algorithms, and proficiency in Python. Winners of competitions such as ACM/ICPC, USACO/NOI/IOI, TopCoder, and Kaggle are preferred.

Good communication and collaboration skills; able to explore new technologies with the team and promote technological progress.

Preferred Qualifications

Demonstrated experience in Deep Reinforcement Learning, Natural Language Processing, or Machine Learning from previous internships, work experience, coding competitions, or publications.

High levels of creativity and quick problem-solving capabilities.

Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.

As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

Seniority level: Internship
Employment type: Internship
Job function: Research, Analyst, and Information Technology
Industries: Software Development
