Google
Product Policy Lead, Generative AI

The Trust & Safety team identifies and takes on the biggest problems that challenge the safety and integrity of our products. Team members use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads.

About The Job
On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud at Google speed, with urgency. You take pride in knowing that every day you are working to promote trust in Google and ensure the highest levels of user safety.

Responsibilities
- Analyze issues facing products with Generative AI capabilities and make policy recommendations on how to address them.
- Collaborate with Product Managers, Trust and Safety teams, and Engineers to influence product decisions and prioritization, and improve the user experience for multimodal Generative AI.
- Drive research and collaboration in the multimodal Generative AI space, both within Google and with key opinion formers through our Government Affairs and Public Policy team, to set industry standards.
- Provide clear and timely updates to executives and executive stakeholders, both within Trust and Safety and more broadly within Google, on issues related to multimodal Generative AI and base model policies.
- Work with sensitive content or situations; may be exposed to graphic, controversial, or upsetting topics or content.

Minimum Qualifications
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in a policy, legal, trust and safety, or technology environment.
- Experience working on AI-related policy issues.

Preferred Qualifications
- JD, MBA, or Master's degree.
- Experience in the development, implementation, and maintenance of policy.
- Experience working on content issues and potentially harmful or upsetting content, including expertise in the technology sector and key policy issues impacting AI safety and content moderation online.
- Ability to translate complex issues into simple and clear language, collaborate with cross-functional stakeholders, and navigate organizational boundaries.
- Ability to communicate effectively in person, in public settings, and in writing, and to identify, gather, and communicate insights on complex technology policy issues.
- Excellent problem-solving and critical thinking skills with attention to detail.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.