This role is critical in helping the company deeply understand and mitigate how harmful content, including child sexual abuse material, manifests on our platform. It also involves investigating complex threats, advancing our investigative capabilities, and developing innovative approaches to prevent harm to our users, particularly minors. This hire will report to the Threat Operations Manager and will work remotely in Australia.
This role involves exposure to graphic and/or objectionable content, including but not limited to graphic images, videos, and writings, offensive and derogatory language, and other potentially objectionable material, e.g., child exploitation, graphic violence, self-injury, animal abuse, and other content that may be considered offensive or disturbing.
What you’ll be doing
Review and respond to sensitive content reports in our queues, including but not limited to the review of child exploitation, graphic violence, self-injury and suicide, explicit images, videos, and other objectionable and/or disturbing content.
Ability to work 1-2 hours on Saturdays, and flexibility to pick up non-standard shifts such as early morning or evening, and weekend/holiday shifts to support our global operations.
Investigate complex cases to develop a detailed understanding of how abuse is occurring and attribute it to the person(s) and/or networks responsible, in order to prepare high-quality written reports for both internal and external audiences, including law enforcement agencies.
Demonstrate operational excellence when evaluating risks, threats, and user privacy in time-critical situations, and execute decision-making while analyzing a variety of factors that include imminence of danger, sensitivities, and/or graphic content.
Work collaboratively in responding to sensitive issues, providing deep knowledge of different exploitative content types and sharing insights and expertise about minor safety and exploitative content issues.
Proactively identify currently undetected abuse by leveraging internal data, open-source intelligence, trusted partner information, and third-party private intelligence.
Respond to users experiencing safety-related or high-harm issues and empathetically address their concerns.
What you should have
1-2 years of experience in Trust and Safety moderation, including experience with child safety content review and removal.
Demonstrated ability to operate in a high-tempo, sensitive environment while meeting specific SLAs.
Exceptional communication skills with an ability to communicate complex information, concepts, or ideas in a confident and well-organized manner through verbal, written, and/or visual means.
Bonus points
Proficiency in a second language (preferably Korean or Japanese).
Tertiary qualifications or equivalent experience in Intelligence Studies, Cybersecurity, Criminal Justice, Criminology, or a related field.
Experience using SQL, Python, or another programming language for data manipulation.