Gicel Tomimbang, 3L, Santa Clara Law
Observations from the SCU Content Moderation & Removal Conference
In-Sourcing v. Outsourcing to the Community
Internet service providers and social networking sites that host user-generated content (“UGC”) routinely filter objectionable or illegal content. Although these efforts combine automated filters with human labor, human moderators, with their ability to exercise judgment, continue to play an integral role in keeping the Internet safe.
Indeed, many companies are expanding their content moderation teams. YouTube, for example, recently announced plans to expand its content moderation team because “[h]uman reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content.” Likewise, Facebook pledged to grow its community operations team to “create global rules” that enable people “to communicate in a borderless way,” according to Monika Bickert, Facebook’s Head of Global Policy Management.
UGC sites take varied approaches to structuring their content moderation functions. For example, Facebook, YouTube, Google, and Pinterest have dedicated teams that review UGC, while Reddit and the Wikimedia Foundation use a mix of community volunteers and employees to screen it.
There are advantages to a dedicated in-house content moderation team. Company training programs and resources can reduce the guesswork of content moderation by guiding team members’ decision-making. Such a structure helps establish consistent standards for what constitutes “acceptable content,” standards that site users can rely on.
However, some panelists made compelling arguments for relying on a mix of community volunteers and employees to moderate content.
First, content moderation is expensive. Delegating some of the work to the community eases the strain on company resources.
Second, outsourcing content moderation to the community empowers users to take ownership of their online communities, much like neighborhood watch groups do in real life. Indeed, community volunteers “sometimes have stricter standards than the legal minimum,” according to Jacob Rogers, Legal Counsel at the Wikimedia Foundation. These stricter community standards, in turn, screen out UGC that may be offensive to community members but might otherwise be overlooked by an in-house content moderation team. Also, giving users the tools to moderate UGC on websites they frequent “engages the user community on important issues and builds support for rolling out new policies as users interact with each other repeatedly to inform policy and community governance,” according to Jessica Ashooh, Director of Policy at Reddit.
Transparency May Conflict with Privacy and Other Interests
Many users lament the seeming arbitrariness of moderation decisions and, in response, have demanded increased transparency.
While transparency is certainly important, companies must juggle competing interests. Alex Feerst, Head of Legal at Medium, points out that transparency can be at odds with users’ privacy interests, as well as those of third parties discussed in user content.
When providing transparency reports, companies can anonymize information to protect privacy. Even then, the company retains underlying data that could be linked back to a specific user. Feerst posits that “the risks of de-anonymization increase with time as better technologies come to exist,” so “[t]oday’s anonymous data set could easily be tomorrow’s repository of personally identifiable information.” Thus, while data anonymization protects personal information, it is not a guaranteed safeguard.
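To make the de-anonymization concern concrete, here is a minimal, hypothetical Python sketch. The field names, salt, and records are invented for illustration and do not reflect any company’s actual reporting pipeline; the point is only that replacing an identifier with a hash does not stop the remaining attributes from singling a person out.

```python
# Hypothetical illustration: pseudonymizing user IDs for a transparency
# report, and why that alone is not a guaranteed safeguard.

import hashlib

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a user ID with a truncated, salted SHA-256 digest."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

SALT = "report-2018-q1"  # hypothetical per-report salt

# Invented takedown records destined for a transparency report.
takedowns = [
    {"user_id": "alice01", "country": "US", "day": "2018-02-03", "reason": "harassment"},
    {"user_id": "bob_99",  "country": "FR", "day": "2018-02-03", "reason": "copyright"},
]

report = [
    {"user": pseudonymize(r["user_id"], SALT), "country": r["country"],
     "day": r["day"], "reason": r["reason"]}
    for r in takedowns
]

# Re-identification risk: quasi-identifiers (country + day + reason) may
# uniquely describe a record. Anyone who can join this report against
# another dataset containing those same fields can link an "anonymous"
# row back to a specific user, with no hash-cracking required.
unique_combos = {(r["country"], r["day"], r["reason"]) for r in report}
print(f"{len(unique_combos)} of {len(report)} rows are uniquely identifiable "
      "from quasi-identifiers alone")
```

Even under these toy assumptions, the hash hides only the identifier itself; the surrounding attributes remain linkable, which is why anonymization is better understood as risk reduction than as a guarantee.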
Transparency does not automatically outweigh privacy and other interests, nor do those interests automatically outweigh transparency. To earn public trust in their content moderation practices, UGC sites must weigh the full spectrum of user interests.