The adoption and proliferation of digital devices and digital record-keeping have amplified our ability to connect with others, driving social discourse, economic activity, and much more. Unfortunately, bad actors are haunting these same spaces. As a result, fraud, disinformation, and objectionable material have grown in recent years, especially as COVID-19 shelter-in-place policies drove increased usage of information systems and networks. Online ghouls have seized this opportunity to optimize their abuse tactics and develop more sophisticated ways to deploy them. According to data from McAfee, cybercrime cost the world more than $1 trillion in 2020, around 1% of global GDP, including a whopping $945 billion in monetary losses from cyber incidents.1
While no industry has escaped evolving fraud or pandemic-driven uncertainty, the figures point to one thing: there is increasing urgency to develop Trust and Safety initiatives that mitigate malicious online content and behavior, heading off the specters that could negatively impact your brand and your users.
Best Practices for Ensuring Safer, Worry-Free Online Experiences
If people perceive an online platform, marketplace, or community to be unsafe, unreliable, or problematic, the incentive to participate is vastly reduced. But if an operator can provide trust and safety, both in terms of operation and perception, people reciprocate trust, resulting in a net positive for all stakeholders. According to an Edelman study, around 8 in 10 respondents (81% globally, 80% in the US) say that trusting a brand to do what is right is a deciding factor in a purchase decision, while 82% of US consumers and three-quarters of global respondents say they will continue to buy a brand they trust, even if another brand suddenly becomes trendy.2
As a foundation for long-term success, e-commerce pioneer eBay has developed some of the most effective approaches to addressing harmful activity on its site, reducing it by 50% over the last seven years.3 Meanwhile, video game developer and publisher Blizzard Entertainment cleansed “Overwatch” of antisocial behavior through several Trust and Safety features, lowering in-game toxicity by 40% in 2019.4
The following are Trust and Safety best practices that constitute a responsible approach to increasing trust in your brand and helping combat threats to your business:
1. Set the record straight through Community Guidelines.
Community Guidelines are a set of rules that establish the standard of behavior expected on a platform, creating a fair and safe place for users to transact and interact. These guidelines must be reviewed and updated regularly to address emerging trends and changes in the platform and the Trust and Safety landscape.
2. Listen to what your users have to say.
When people experience harmful behaviors, they tend to build resentment and eventually leave a platform. Hence, users need a way to communicate with the platform directly to make moderators aware of violations and inappropriate behaviors. The reporting process should be included in published guidelines and must be accessible to users.
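As a concrete illustration, a minimal in-app reporting flow captures who reported what and why, validates the report against the published violation categories, and routes it to a moderation queue. The schema and category names below are hypothetical placeholders, not any specific platform's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical report categories; a real platform would define these
# in its published Community Guidelines.
CATEGORIES = {"spam", "harassment", "fraud", "explicit_content", "other"}

@dataclass
class AbuseReport:
    reporter_id: str
    content_id: str
    category: str
    details: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def submit_report(queue: list, report: AbuseReport) -> bool:
    """Validate a user report and place it on the moderation queue."""
    if report.category not in CATEGORIES:
        return False  # reject unknown categories so triage stays clean
    queue.append(report)
    return True

queue: list[AbuseReport] = []
ok = submit_report(queue, AbuseReport("user1", "post42", "harassment",
                                      "threatening replies"))
bad = submit_report(queue, AbuseReport("user2", "post42", "not_a_category"))
```

Keeping the accepted categories in lockstep with the published guidelines is what makes the reporting process "accessible": users report in the same vocabulary the rules are written in.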
3. Transparency is key!
Honest and open communication helps platforms build relationships with users. For example, publishing transparency reports that share statistics (including government requests for information), violation numbers and types, and appeals and restorations of content shows users that their safety is being taken seriously.
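The headline figures in such a report can be produced by aggregating moderation outcomes over a reporting period. The sketch below is a generic illustration; the event fields and labels are assumptions, not a standard schema:

```python
from collections import Counter

def transparency_summary(events: list[dict]) -> dict:
    """Aggregate moderation events into headline transparency figures."""
    removals = Counter(e["violation_type"] for e in events
                       if e["action"] == "removed")
    appeals = sum(1 for e in events if e.get("appealed"))
    restored = sum(1 for e in events
                   if e.get("appealed") and e.get("restored"))
    return {
        "removals_by_type": dict(removals),  # violation numbers and types
        "appeals": appeals,                  # user appeals filed
        "restorations": restored,            # content restored on appeal
    }

events = [
    {"action": "removed", "violation_type": "spam"},
    {"action": "removed", "violation_type": "spam",
     "appealed": True, "restored": True},
    {"action": "removed", "violation_type": "harassment", "appealed": True},
]
summary = transparency_summary(events)
```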
4. Track your key performance indicators (KPIs).
KPIs are commonly used across business areas to give stakeholders insight into processes and initiatives and to align departments with business goals. Likewise, defining and tracking Trust and Safety KPIs allows platforms to understand their current processes and measure the effectiveness of each initiative against progress toward their goals.
5. Ensure a safe shopping experience.
Securing the network and infrastructure to systematically protect the platform, and applying policies and procedures to protect the individuals who use it, are imperative to ongoing marketplace success. This holistic approach assures buyers and sellers that harmful activity on the platform remains in check and that it is safe to conduct business.
6. Protect advertisers to protect revenue.
When it comes to determining ad spending, trust-related attributes outweigh performance attributes 56% to 44%.5 To retain advertiser spending, it is therefore important to audit content safety policies, ensure the overall environment is brand-suitable, and adopt and promote privacy by default.
7. Moderator well-being should be a priority.
Without tools and policies to protect moderators from the lasting effects of exposure to disturbing content, high turnover rates can plague content moderation teams. Integrating AI moderation tools can weed out toxic material on the first pass, shrinking the queue and allowing human moderators to focus on more nuanced content. The result: increased job satisfaction, efficiency, and productivity. Support for moderator well-being can also include remote work policies, mental health counseling, and career development.
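The first-pass triage described above can be sketched as a simple threshold policy: content a classifier scores as clearly toxic is removed automatically, clearly benign content is approved automatically, and only the uncertain middle band reaches human reviewers. The thresholds and the stand-in scorer below are illustrative assumptions, not a production model:

```python
def triage(items, score_fn, remove_above=0.9, approve_below=0.2):
    """Split content into auto-removed, auto-approved, and human-review sets.

    score_fn returns a toxicity probability in [0, 1]; the thresholds
    are hypothetical and would be tuned against labeled data.
    """
    removed, approved, review = [], [], []
    for item in items:
        score = score_fn(item)
        if score >= remove_above:
            removed.append(item)   # clear-cut violations: no human exposure
        elif score <= approve_below:
            approved.append(item)  # clearly benign: skip the queue
        else:
            review.append(item)    # nuanced cases go to moderators
    return removed, approved, review

# Stand-in scorer: a real system would call a trained classifier.
scores = {"a": 0.95, "b": 0.05, "c": 0.5}
removed, approved, review = triage(list(scores), scores.get)
```

The well-being benefit comes from the first branch: the most disturbing material is filtered out before a human ever sees it, and the review queue shrinks to the genuinely ambiguous cases.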
We help make communities a better place.
At Teleperformance, we believe that good onboarding experiences, robust security and privacy measures, proper due diligence, and careful monitoring of activities are fundamentals for any business. That’s why we safeguard the entire business ecosystem across six critical areas: User-Generated Content Moderation, Ad Moderation and Monetization, E-commerce, Shopping, and Payment/Fraud, Application and Developer Support, Digital and Media Support, and Identity and Account Authenticity.
Click here to learn more about our Trust and Safety services and how we can help your business balance risk and revenue with the right mindset, technologies, and processes.