
By Yoel Frischoff

Decentralized Content Moderation as a Service

Updated: Jul 3, 2023




How Can Social Media Thrive in a World of Misinformation?
Misinformation is a threat not only to society, but to the social platforms themselves.
[Figure: Distributed content moderation model]

 

State of Content Moderation


Need

For better or worse, social networks have seized the throne of public opinion from established media. Thoroughness, corroboration, fact-checking, and even identifiable opinion have all given way to immediacy.


Although this process began, and continues, with 24/7 live broadcasting in traditional mass media, the immediacy trend has accelerated by orders of magnitude on social media, as these platforms gained popularity and captured a significant share of the waking hours of the population worldwide.


Immediacy was accompanied by the democratization of expression. This democratization, a blessing in itself, brought to the public sphere a multitude of voices and sentiments, some new, some old and previously suppressed.


These voices are only occasionally identifiable and traceable, while the very nature of social networks allows anonymity and deniability, and helps obscure the source of information.


The dissemination mechanisms built into social media, on the other hand, amplify messages at lightning speed and, importantly, with little control.


These characteristics, which helped crown social media the king of societal debate, have also given rise to a number of threats:

  • Impersonation

  • Bots and fake entities

  • Fake News

  • Propaganda

  • Predatory marketing

  • Incendiary content

  • Human trafficking

  • Child pornography

The list goes on and on, and it varies by political culture.


 

Means


Although social media platforms - the darlings of innovation - initially got away with harmful content posted by their users, they did, and still do, face pressure on two fronts:


  • Regulation is building up to enforce content moderation

  • Advertisers shy away from controversial content, lest their image and sales be hurt


Reluctantly, social networks have embraced content moderation.


An army of thousands of content moderators, assisted by in-house and third-party technologies, is employed to monitor content and users.

 

Cost

Content moderation cost the industry roughly $8 billion in 2021 and is growing fast, at a 9.3% CAGR. In its manual form, it is one of the major cost centers for social media operators, weighing heavily on the bottom line - as attested by the abrupt layoff of practically all of Twitter's moderation team, from the VP down.


The human cost of painstakingly poring over abusive, and sometimes extremely disturbing, content is also worth mentioning: moderation teams and veterans report strain and post-traumatic stress disorders.

 

Criticism

These moderation efforts are currently siloed within each platform, with several drawbacks:

  • Arbitrary and opaque policies, both on identification and containment

  • Significant costs to the organization

  • Animosity from certain political factions

  • Forceful influence by authoritarian regimes (and not only them)


In a way, social networks are caught between an increasing barrage of abuse and the thankless task of moderation, which costs billions while they fend off claims of free-speech violations or political bias.


Elon Musk, for one, seems to be on a steep learning curve: a Twitter troll turned owner, he is discovering the implications of his actions on an entity that depends on the goodwill and trust of many stakeholders - from users across the political spectrum, to advertisers, to regulators. He is definitely in for a ride!

 

Challenge


It is a challenge for social media companies, as multinational commercial entities, to monitor online discourse - there is an intrinsic tension between censorship and free speech, and between political interests and free speech.


The challenge at hand is to give the social media industry, and society as a whole, content moderation capabilities that are:

  • Impartial and trustworthy

  • Timely and scalable

  • Cost effective

 

Opportunity


Trope Identification

I believe that fake news, propaganda, psychological warfare, and other malignant content often come in the form of repeating tropes: it is as if conspiracy theorists read from the same book, racists take the same page, and science deniers do too.


What if we could train systems to identify such tropes and flag them for inspection?

Would it not reduce the human effort spent on flagging?

Could this automation free up capacity for faster arbitration of appeals?
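
As an illustration, here is a minimal sketch of trope flagging with sentence embeddings. It assumes the open-source sentence-transformers library; the model name, example tropes, and similarity threshold are illustrative placeholders, not a production configuration.

```python
# Minimal sketch: flag posts that closely match known misinformation "tropes"
# using sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A small library of previously confirmed tropes (hypothetical examples).
known_tropes = [
    "the election was stolen through mass mail-in ballot fraud",
    "vaccines contain microchips for tracking the population",
]
trope_embeddings = model.encode(known_tropes, convert_to_tensor=True)

def flag_for_review(post: str, threshold: float = 0.75) -> bool:
    """Return True if the post is semantically close to any known trope."""
    post_embedding = model.encode(post, convert_to_tensor=True)
    similarity = util.cos_sim(post_embedding, trope_embeddings)
    return bool(similarity.max() >= threshold)

# Posts flagged here would only be queued for review (human or networked),
# not removed automatically.
```

A flag at this stage only queues the item for inspection; the threshold trades recall against reviewer load.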

Distributed Network

The blockchain, it is often said, is a solution in search of a problem. I believe, however, that content moderation is one such problem - one where distributed technology can provide several important benefits:

  • Recurring tropes can be shared through a distributed network, so they can be stemmed instantly (see the fingerprint sketch after this list)

  • Content tokenization: harmful content identification will be performed by an international network of specialized organizations

  • A voting mechanism will reward the organization that finds an offense - should the network agree

  • Once flagged, any variant or replica of harmful content will be removed across all social media

  • Each social network handles offenders according to its internal policies

  • Social media platforms will funnel their feeds to the network for inspection and moderation, thus cutting their internal effort and liability
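
To make the first two bullets concrete, here is a minimal sketch of a shared flagged-content registry. It assumes every participating platform can compute the same normalized fingerprint; the plain dictionary stands in for whatever distributed ledger or replicated store the network would actually use, and all names and examples are illustrative.

```python
# Minimal sketch of a shared "flagged content" registry.
import hashlib
import re
from typing import Optional

def fingerprint(text: str) -> str:
    """Normalize text (case, whitespace, punctuation) and hash it, so
    trivially edited replicas map to the same fingerprint."""
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()

flagged_registry: dict = {}  # fingerprint -> ruling

def publish_flag(text: str, ruling: str) -> None:
    """Called once the network has agreed an item is abusive."""
    flagged_registry[fingerprint(text)] = ruling

def check(text: str) -> Optional[str]:
    """Any platform can check incoming content against prior rulings."""
    return flagged_registry.get(fingerprint(text))

publish_flag("Vaccines contain MICROCHIPS!!", "abusive: health misinformation")
print(check("vaccines contain microchips"))  # hits the same fingerprint
```

Exact-hash fingerprints only catch trivially edited replicas; paraphrases and near-duplicates would need fuzzy fingerprints, for example the embedding similarity sketched earlier.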

 

Participating organizations

  • Fact-checking entities

  • Political and rights organizations

  • News media

  • Science organizations


Fact-checking, content monitoring, and identity monitoring are the proof of work needed to generate the monetary reward - once the community (the distributed international network) agrees with the finding.
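
A minimal sketch of that reward rule follows: the first organization to flag an item is paid only if a weighted majority of the network confirms the flag. The weights, payout, and two-thirds threshold are illustrative assumptions, not a specified protocol.

```python
# Minimal sketch of first-to-flag compensation gated by a weighted vote.
from dataclasses import dataclass

@dataclass
class Vote:
    organization: str
    weight: float      # e.g. reputation or stake in the network
    is_abusive: bool   # the organization's verdict on the flagged item

def settle_flag(flagger: str, votes: list,
                reward: float = 10.0, threshold: float = 2 / 3) -> float:
    """Return the reward owed to the organization that first flagged the item."""
    total = sum(v.weight for v in votes)
    agree = sum(v.weight for v in votes if v.is_abusive)
    # Pay the flagger only if the weighted majority confirms the flag.
    return reward if total > 0 and agree / total >= threshold else 0.0

votes = [Vote("FactCheckOrg", 3.0, True),
         Vote("NewsDeskA", 2.0, True),
         Vote("ScienceNGO", 1.0, False)]
print(settle_flag("FactCheckOrg", votes))  # 10.0, since 5/6 of the weight agrees
```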


 

Benefits

This distributed, cross-platform network can provide the content moderation services that the industry, users, and advertisers need, in a trustworthy, timely, and cost-efficient way.

 

Value Chain - Content Moderation as a Service

  1. The proposed value chain starts with consumer-facing entities - news organizations and social media platforms - which need content moderation services.

  2. Each content item is sent as a query to the distributed network layer, which first tries to identify whether similar occurrences of the abusive content already exist in its databases.

  3. New items are exposed, through a challenge clearinghouse, to an open network of fact-checkers, NGOs, and anyone else interested in examining content and rendering judgement.

  4. The decision and compensation are made on a first-to-answer / weighted-majority rule.

  5. The decision is propagated back to the database and to the query generator as a compliant/abusive verdict, for further processing (the full flow is sketched after this list).
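
The sketch below ties steps 1-5 together. It reuses the fingerprint(), check(), publish_flag(), Vote, and settle_flag() helpers from the earlier sketches; clearinghouse_challenge() is a hypothetical stub standing in for the network service that collects votes from participating organizations.

```python
# Minimal end-to-end sketch of the value chain query flow (steps 1-5).
def clearinghouse_challenge(item_text: str):
    """Stub: a real system would open a challenge and gather votes here."""
    votes = [Vote("FactCheckOrg", 3.0, True), Vote("NewsDeskA", 2.0, True)]
    return votes, "FactCheckOrg"  # votes plus the first organization to flag

def moderate(item_text: str) -> str:
    # Step 2: check the distributed database for an existing ruling.
    prior = check(item_text)
    if prior is not None:
        return prior
    # Step 3: unknown item - open a challenge to the fact-checking network.
    votes, first_flagger = clearinghouse_challenge(item_text)
    # Step 4: decide and compensate by first-to-answer / weighted majority.
    payout = settle_flag(first_flagger, votes)
    ruling = "abusive" if payout > 0 else "compliant"
    # Step 5: propagate the verdict back to the registry and the querier.
    if ruling == "abusive":
        publish_flag(item_text, ruling)
    return ruling
```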


[Figure: Distributed content moderation value chain - how content is moderated, and how the process generates both revenue and objectivity]


