Decentralized Content Moderation as a Service
Updated: Feb 5
How Social Media Can Thrive in a World of Misinformation
State of Content Moderation
For better or worse, social networks have seized the throne of public opinion from established media. Thoroughness, corroboration, fact checking, and even identifiable opinion have all given way to immediacy.
Although this process began, and continues, in the era of 24/7 live broadcasting on traditional mass media, the immediacy trend has accelerated by orders of magnitude on social media, as these platforms gained popularity and conquered a significant share of the waking hours of populations worldwide.
The immediacy phenomenon was accompanied by a democratization of expression. This democratization, a blessing in itself, brought into the public sphere a multitude of voices and sentiments - some new, some old and previously suppressed.
These voices are only occasionally identifiable and traceable, as the very nature of social networks allows anonymity and deniability and helps obscure the source of information.
The dissemination mechanisms built into social media, on the other hand, amplify messages at lightning speed and, importantly, with little control.
These characteristics, which propelled social media to the throne of societal debate, have also given rise to a number of threats:
Bots and fake entities
The list goes on and on, varying by political culture.
Although social media companies - the darlings of innovation - at first escaped accountability for harmful content posted by their users, they did, and still do, face pressure on two fronts:
Regulation builds up to enforce content moderation
Advertisers shy away from controversial content, lest their image and sales are hurt
Reluctantly, social networks have embraced content moderation.
An army of thousands of content moderators, assisted by in-house and third-party technologies, is employed to monitor content and users.
The industry carried a cost of roughly $8 billion in 2021 and is growing fast, at a 9.3% CAGR. In its manual form, moderation is one of the major cost centers for social media operators, weighing heavily on the bottom line - as attested by the abrupt layoff of practically all of Twitter's moderation team, from the VP down.
The human cost of painstakingly poring over abusive, and sometimes extremely disturbing, content is also worth mentioning, with current and former moderators reporting strain and post-traumatic stress disorders.
These moderation efforts are currently siloed within each platform, with several drawbacks:
Arbitrary and opaque policies, for both identification and containment
Significant costs to the organization
Animosity from certain political factions
Forceful influence by authoritarian regimes (and not only them)
In a way, social networks are caught between an increasing barrage of abuse and the thankless task of moderation, which costs billions while they fend off claims of free-speech violations or political bias.
Elon Musk, for one, appears to be on a steep learning curve: a Twitter troll turned owner, he is discovering the implications of his actions on an abstract entity that depends on the goodwill and trust of many stakeholders - from users across the political spectrum, to advertisers, to regulators. He is definitely in for a ride!
There is an inherent challenge for social media companies, as multinational commercial entities, in monitoring online discourse: an intrinsic tension between censorship and free speech, and between political interests and free speech.
The challenge at hand is to give the social media industry, and society as a whole, content moderation capabilities that are:
Impartial and trustworthy
Timely and scalable
I believe that fake news, propaganda, psychological warfare, and other malignant content often come in the form of repeating tropes: it is as if conspiracy theorists read from the same book, as if racists and science deniers copied from the same page.
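If malignant content really does recycle the same tropes, a moderation service could flag posts whose wording closely overlaps a shared library of known tropes. The following is a minimal sketch of that idea using word-shingle Jaccard similarity; the function names, the sample trope, and the 0.5 threshold are all illustrative assumptions, not part of any existing system.

```python
# Hypothetical sketch: flag posts that closely match known tropes.
# Uses k-word shingles and Jaccard set similarity (stdlib only).

def shingles(text: str, k: int = 3) -> set:
    """Return the set of overlapping k-word windows in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|; 0.0 for two empty sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def matches_known_trope(post: str, known_tropes: list[str],
                        threshold: float = 0.5) -> bool:
    """Flag a post whose shingle set overlaps any known trope enough."""
    post_sh = shingles(post)
    return any(jaccard(post_sh, shingles(t)) >= threshold
               for t in known_tropes)

# Illustrative usage with a made-up trope library:
tropes = ["the moon landing was staged in a hollywood studio"]
print(matches_known_trope(
    "the moon landing was staged in a hollywood studio", tropes))  # True
print(matches_known_trope(
    "i enjoyed the sunny weather today at the park", tropes))      # False
```

A production system would use more robust techniques (MinHash for scale, embeddings for paraphrase detection), but the design point stands: a shared, cross-platform trope library lets each platform detect repeats without each one rediscovering them independently.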