Note: By applying to this position you will have an opportunity to share your preferred working location from the following:
Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or a related field.
- Experience in presenting to technical stakeholders and executive leaders.
- Experience in content policy, anti-abuse, or customer support operations and in data analysis or coding (e.g., Python).
- Knowledge of international and geopolitical events and their impact on global companies.
- Ability to work through ambiguity in a changing, high-pressure environment.
- Excellent communication skills with the ability to work in cross-functional teams (e.g., stakeholders and executives).
- Excellent problem-solving and critical thinking skills with attention to detail.
The Trust and Safety team is responsible for protecting Google's users by ensuring their online safety and fighting abuse across Google products.
In this role, you will develop and manage cross-functional relationships with Operations, Engineering, Product, and Legal teams in an effort to improve processes for identifying and resolving policy and quality issues on Search and Generative AI. You will be responsible for analyzing market trends, setting forward-looking strategy, and implementing solutions to address user safety and quality issues, ensuring a quality user experience across Search and Generative AI products. You will understand the user and apply operational and technical acumen to protect users. You will work globally and cross-functionally to navigate challenging online safety situations and manage abuse and fraud cases. You will work with sensitive content or situations and may be exposed to graphic, controversial, or upsetting topics or content.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
- Perform on-call responsibilities on a rotating basis, including weekend coverage, and make policy decisions and implement fixes.
- Fight abuse to ensure trust and reputation not only for Search and Generative AI, but also for Google as a brand and company.
- Analyze data on past escalations to drive process, product, and policy improvements, and manage policy escalations processes across Search, News, Google Assistant and Generative AI.
- Analyze market trends, set forward-looking strategy, and implement solutions to solve user safety and quality issues, to ensure a quality user experience across Search and Generative AI products.
- Develop and manage cross-functional relationships with Engineering, Product, Legal and Operations teams.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.