GGWP recently sent Trust & Safety veteran Joi Podgorny to two T&S conferences in Europe to represent us and assess the current state of the industry. Here are some of her takeaways:
Gaming leads the way
Because of the social nature of multiplayer games, the gaming industry has been at the forefront of Trust & Safety for decades. Some of the most inspiring, data-driven stories are coming out of gaming operations, including pro-social research, analytics tying revenue gains to reductions in toxicity, and positive impacts from incorporating safety-by-design methods early in product development.
Moderation is just one part
When most people think about Trust & Safety, they think of moderation – whether performed by dozens (or hundreds) of humans or assisted by AI models. Moderation is an important part of Trust & Safety, but the discipline also includes policy management, user engagement, safety-by-design work with product and engineering, reputation and behavior analysis, and reports and appeals. All of that then has to be analyzed and synthesized for compliance, transparency reports, and internal stakeholders. So content moderation is just one piece of the larger Trust & Safety pie.
Humans are still integral
There is so much promise in adding LLMs and machine learning models to safety platforms. One of those promises (or threats, according to some pundits) is the elimination of the costly headcount that has been associated with moderation for decades. AI can help us reduce or even remove the need for some of the brutal, repetitive jobs humans have historically done in moderation – not to mention its ability to process content at a scale humans cannot.
But the reality is that specialized staff will still be needed to manage the technology. We still need people to develop and manage community guidelines, handle policy enforcement, perform quality assurance, and more. Thankfully, many current safety platforms have experienced teams to assist with that, so brands don't have to keep all of those teams in house.
Censorship vs safe spaces
Another hot topic Trust & Safety professionals are grappling with is the recent rallying cry of censorship, coming not only from users but from some governments as well. The industry has weathered storms like this before, but this time feels more volatile.
As they have in the past, T&S professionals – wherever they sit within their organizations – will lend strength to the strategists going head to head with newly amplified talking heads, stay steadfast in their policy enforcement workflows, and compromise where they need to. Good safety tools are built to be customizable to different brands' needs, so we are prepared. Ethics are (or should be) the foundation of Trust & Safety, and protecting the communities we care about will remain the guiding star.
Penalties are coming
The new slate of international regulations has had all of us in Trust & Safety refreshing our feeds in anticipation of more details. UK and EU regulators have been working to provide materials explaining the intentions and logistics of complying with the new laws, including report templates, promises of more refined guidance to come, and channels for asking questions.
Many in the industry feel we are just waiting for the sanctions and fines to start so we can move to the next chapter of enforcement. A recent sanction on the parent company of OnlyFans coincided with the comprehensive launch date of the UK's Online Safety Act (OSA), even though the violation was of another law altogether. Some of us remember the early days of COPPA, when brands were hesitant to invest in compliance until the fines (and associated press releases) started. Hopefully brands managing communities have lived and learned enough to know that building safety into their products early is better, but we all know there are those who will be more inspired to act by punitive measures.
More vendors = more options
It’s natural to want less competition so that your product is the only option for decision makers. But the reality is that more competition simply means a greater need to highlight differentiation, quality, and results.
You can’t just say that you have AI models. Why are yours different? What other features can you offer brands? What results can you point to that reassure potential customers you are the best choice for their specific brand in mitigating the arms race of risk that is managing user-generated content?
At GGWP, our main differentiator is that we are a full-scope safety platform with best-in-class performance, enabling our customers to manage their communication moderation needs with one tool instead of patching together multiple internal and external tool sets. This lets a brand’s developers focus on building their core product rather than the tools to keep their users safe.
At the end of the day, we are all trying to make the connected spaces we inhabit safer and more enjoyable for the users drawn to them. And there are so many spaces and so many users that, to quote the cultural zeitgeist of Severance – our work remains mysterious and important.