
By Yoel Frischoff

Decentralized Content Moderation


How Social Media Can Thrive in a World of Misinformation

State of Content Moderation


For better or worse, social networks have seized the throne of public opinion from established media. Thoroughness, corroboration, fact checking, and even identifiable opinion have all given way to immediacy.

Although this process began, and continues, in the era of 24/7 live broadcasting on traditional mass media, the immediacy trend has accelerated by orders of magnitude on social media, as these platforms gained popularity and conquered a significant share of the waking hours of populations worldwide.

The immediacy phenomenon was accompanied by a democratization of expression. This democratization, a blessing in itself, brought to the public sphere a multitude of voices and sentiments, some of them new, some old and previously suppressed.

These voices are only occasionally identifiable and traceable, as the very nature of social networks allows anonymity and deniability, and helps obscure the source of information.

The dissemination mechanisms built into social media, on the other hand, amplify messages at lightning speed and, importantly, with little control.

These characteristics, contributing to the rise of social media as king of societal debate, have given rise to a number of threats:

  • Impersonation

  • Bots and fake entities

  • Fake News

  • Propaganda

  • Predatory marketing

  • Incitement

  • Human trafficking

  • Child sexual abuse material

The list goes on and on, and varies by political culture.


Although social media companies - the darlings of innovation - initially escaped accountability for harmful content posted by their users, they did, and still do, face pressure on two fronts:

  • Regulation is building up to enforce content moderation

  • Advertisers shy away from controversial content, lest their image and sales be hurt

Reluctantly, social networks have embraced content moderation.

An army of thousands of content moderators, assisted by in-house and third-party technologies, is employed to monitor content and users.



The industry bore a cost of roughly $8 billion in 2021, and is growing fast at a 9.3% CAGR.



These moderation efforts are currently siloed between each media platform, with some drawbacks:

  • Arbitrary and opaque policies, both on identification and containment

  • Significant costs to the organization

  • Animosity from certain political factions

  • Forceful influence by authoritarian regimes (and not only them)

In a way, social networks are caught between an increasing barrage of abuse and the thankless task of moderation, which costs billions while they fend off claims of free-speech violations and political bias.

Elon Musk, for one, is on a steep learning curve, it seems: a Twitter troll turned owner, he is discovering the implications of his actions on an abstract entity that depends on the goodwill and trust of many stakeholders - from users across the political spectrum, to advertisers, to regulators. He is definitely in for a ride!



It is a challenge for social media companies - multinational, commercial entities - to monitor online discourse, as there is an intrinsic tension between censorship and free speech, and between political interests and free speech.

The challenge at hand is to give the social media industry, and society as a whole, content moderation capabilities that are:

  • Impartial and trustworthy

  • Timely and scalable

  • Cost effective



Trope Identification

I believe that fake news, propaganda, psychological warfare, and other malignant content often come in the form of repeating tropes: it is as if conspiracy theorists read from the same book, as if racists took the same page, and so did science deniers.

What if we could train systems to identify such tropes and flag them for inspection?

Would it not reduce the human-resources cost of flagging?

Could this automation free up capacity for faster arbitration of appeals?
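To make the idea concrete, here is a minimal sketch of automated trope flagging. The trope texts, the shingle size, and the similarity threshold are all illustrative assumptions; a production system would use trained classifiers or embeddings rather than word-overlap similarity.

```python
# Hypothetical sketch: flag posts that closely echo known "tropes"
# (recurring misinformation phrasings) using word-shingle Jaccard similarity.
# KNOWN_TROPES and the threshold are illustrative, not a real dataset.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets: |A ∩ B| / |A ∪ B|."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

KNOWN_TROPES = [
    "the election was stolen by a secret cabal",
    "vaccines contain microchips to track the population",
]

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    """Flag a post for human review if it closely matches a known trope."""
    post_shingles = shingles(post)
    return any(jaccard(post_shingles, shingles(t)) >= threshold
               for t in KNOWN_TROPES)
```

The point of the sketch is the division of labor: cheap automation surfaces candidate matches, and human moderators spend their time only on the flagged residue and on appeals.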

Distributed Network

The blockchain, it is often said, is a solution searching for a problem. I believe, however, that content moderation is one such problem, where distributed technology can provide several important benefits:

  • Recurring tropes can be shared through a distributed network, to be stemmed instantly

  • Content tokenization: harmful-content identification will be performed by an international network of specializing organizations

  • A voting mechanism will reward the organization that finds an offence - should the network agree

  • Once flagged, any variant or replica of harmful content will be removed across all social media

  • Each network handles offenders per its internal policies

  • The social media platforms will funnel their feeds for inspection and moderation, thus cutting their internal effort and liability
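Removing "any variant or replica" across platforms implies a shared fingerprinting scheme: trivially modified copies must map to the same identifier. A minimal sketch, assuming text content and simple normalization (real systems would use perceptual or fuzzy hashing to also catch image variants and paraphrases):

```python
# Hypothetical sketch: normalize text before hashing so trivial variants
# (case changes, punctuation, extra whitespace) share one fingerprint
# that can be broadcast across the network. SHA-256 over normalized text
# is a stand-in for the fuzzy/perceptual hashes real systems would use.

import hashlib
import re

def fingerprint(text: str) -> str:
    """Return a stable fingerprint for a piece of text content."""
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower())  # strip punctuation
    normalized = " ".join(normalized.split())             # collapse whitespace
    return hashlib.sha256(normalized.encode()).hexdigest()
```

Once a fingerprint is confirmed harmful by the network, each platform can drop matching content at ingestion time without re-running the full identification pipeline.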


Participating organizations

  • Fact-checking entities

  • Political and rights organizations

  • News media

  • Science organizations

The fact checking and the content and identity monitoring constitute the proof of work needed to generate the monetary reward - once the community (the distributed international network) agrees with the finding.
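The agree-then-reward step above can be sketched as a simple majority vote among member organizations. All names, the quorum rule, and the reward amount are illustrative assumptions; an actual network would encode these rules in its consensus layer.

```python
# Hypothetical sketch of the voting-and-reward mechanism: member
# organizations vote on a flag; if a majority of all members agrees,
# the flagging organization is rewarded and the content fingerprint
# joins a shared blocklist. Quorum rule and reward size are illustrative.

from dataclasses import dataclass, field

@dataclass
class Flag:
    content_id: str                             # fingerprint of flagged content
    flagged_by: str                             # organization raising the flag
    votes: dict = field(default_factory=dict)   # org -> bool ("is it harmful?")

class ModerationNetwork:
    def __init__(self, members: list[str], reward: int = 10):
        self.members = members
        self.reward = reward
        self.balances = {m: 0 for m in members}  # earned rewards per org
        self.blocklist = set()                   # fingerprints confirmed harmful

    def vote(self, flag: Flag, org: str, harmful: bool) -> None:
        """Record one member organization's vote on a flag."""
        if org in self.members:
            flag.votes[org] = harmful

    def tally(self, flag: Flag) -> bool:
        """Pass the flag if a majority of ALL members voted 'harmful'."""
        yes = sum(flag.votes.values())
        passed = yes > len(self.members) / 2
        if passed:
            self.balances[flag.flagged_by] += self.reward
            self.blocklist.add(flag.content_id)
        return passed
```

Tying the reward to network-wide agreement, rather than to the flag itself, is what keeps any single organization from unilaterally censoring content or farming rewards.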



This distributed, cross-platform network can provide the content moderation services that the industry, users, and advertisers need - in a trustworthy, timely, and cost-efficient way.
