SupportFinity™
Child Safety Analyst, Responsible AI Testing
SupportFinity™, Washington, District of Columbia, US, 20022
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Washington D.C., DC, USA; Austin, TX, USA.

Minimum qualifications:
- Bachelor's degree in Political Science, Communications, Computer Science, Data Science, History, International Affairs, Social Work, Child Development, a related discipline, or equivalent professional experience.
- 4 years of experience in Trust and Safety operations, data analytics, policy, cybersecurity, product policy, privacy and security, legal, compliance, risk management, intel, content moderation, AI testing, or another relevant environment.

Preferred qualifications:
- Master's degree.
- Experience with machine learning.
- Experience in SQL, data collection/transformation, visualization/dashboards, or a scripting/programming language (e.g., Python).
- Strong understanding of AI systems, machine learning, and their potential risks.
- Excellent written and verbal communication and presentation skills, and the ability to influence cross-functionally at various levels.

About The Job

Trust & Safety team members are tasked with identifying and addressing major challenges to the safety and integrity of our products. They use technical expertise, problem-solving skills, user insights, and proactive communication to protect users and partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. You are a strategic thinker and team player with a passion for doing what's right, working globally and cross-functionally to identify and combat abuse and fraud swiftly, promoting trust and ensuring user safety.

As an Analyst on the Trust and Safety Responsible AI Child Safety Testing team, you will specialize in structured and unstructured pre-launch safety testing of Google's GenAI models and products, focusing on online child abuse and exploitation risks. You will collaborate with technical experts to develop and implement testing protocols, use data analysis to produce actionable insights, and manage multiple stakeholders efficiently. Your goal is to ensure AI products do not generate unsafe content related to children.

The US base salary range for this full-time position is $110,000-$157,000 + bonus + equity + benefits. Salary ranges are determined by role, level, and location; individual pay within the range is influenced by skills, experience, and education. Your recruiter can share more about the specific salary range during the hiring process. The compensation listed reflects base salary only; bonus, equity, and benefits are additional. Learn more about benefits at Google.

Responsibilities
- Own and lead structured pre-launch child safety testing for Google's prominent GenAI products.
- Define and execute prompt generation strategies to test product compliance, collaborating with the RAI Testing Sustainability and Data Science teams, leveraging LLM-based prompt tools, and providing clear instructions to vendors.
- Collaborate with product teams to scrape responses, providing consultation on scalable scraping solutions, accessing models/UI, and instructing vendors.
- Perform prompt/response rating against standards, providing clear instructions, clarifying gray areas, and calibrating quality.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, national origin, sexual orientation, age, disability, gender identity, or Veteran status. We consider qualified applicants regardless of criminal histories, as permitted by law. For accommodations, please complete our Accommodations for Applicants form.