Google Inc.
Overview
Advanced. Experience owning outcomes and decision-making, solving ambiguous problems, and influencing stakeholders; deep expertise in the domain.

Minimum qualifications
- Bachelor's degree or equivalent practical experience.
- 10 years of experience in trust and safety at a technology company.
- 7 years of experience in trust and safety, intelligence, security analysis, threat or risk management, geopolitical forecasting, or a related field.
- 1 year of experience working with GenAI technologies.

Preferred qualifications
- Experience making business decisions, including identifying gaps or business needs, and innovating and scaling solutions.
- Experience in data analytics using statistical analysis and hypothesis testing.
- Experience analyzing Machine Learning (ML) model performance or working on Large Language Models (LLMs).
- Ability to think logically, work effectively in a constantly changing environment, and influence cross-functionally and cross-geographically with all levels of management.
- Ability to function well as a critical thinker in high-pressure situations and take the lead as needed.
- Ability to manage multiple executive stakeholders, influencing safety strategy and operational execution at all levels.

About the job
You will bring your critical thinking and leadership skills to analyze the risks and opportunities presented by Generative Artificial Intelligence (GenAI) models and the product features built on them, and to design a responsibility strategy that appropriately balances those risks and opportunities. You will synthesize the expertise and perspectives of cross-functional teams to enable thoughtful prioritization of issues, and design a testing strategy as well as any risk mitigation measures that may be needed. This responsibility extends beyond launch, encompassing post-launch monitoring and analysis to ensure ongoing safety and address emerging trends. You will need to be a pragmatic thinker and an excellent communicator and influencer.

Salary
The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Please note that the compensation details listed in US role postings reflect the base salary only and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities
- Deliver the safety strategy for generative AI model or product feature launches, in collaboration with executive stakeholders from Google DeepMind (GDM), Trust and Safety, Legal, product teams, and more.
- Analyze and prioritize risks and opportunities; design the testing strategy, analyze results, design mitigations, and drive post-launch monitoring.
- Act as a trusted partner in a changing environment, coordinating and providing a consolidated view of risks and mitigations across all launch pillars (e.g., policy, testing, features) to cross-functional partners and leadership.
- Perform on-call responsibilities on a rotating basis.
- Be comfortable being exposed to graphic, controversial, and upsetting content.

Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.

Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.

To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.