Social Networks, Troll Actors and Taxation Strategies
Some notes from the discussion between @Chamath and @Lexfridman on the Lex Fridman Podcast at 1:03:00.
Problem
Malevolent actors (e.g. trolls, toxic or bad-faith participants) degrade the product experience within social networks.
Solution
Networks often implement “taxes” in order to disincentivize bad behaviour.
Real Identity
Reduce bad behaviour by imposing real-world reputation costs
Adding real identity to a social network creates a reputation cost for virulent behaviour. In a highly networked, globalized world, the time-to-live of information asymmetries approaches zero; in other words, reputational information spreads with a high viral R value. While reputation costs can mitigate the malevolent-actor problem, they also strongly reduce the radius of discourse, since stating a non-consensus opinion now carries “skin in the game”.
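As a rough sketch (the function and its parameters are illustrative, not from the discussion), the reputation tax can be read as an expected cost per post: once information spreads at a high R value, the probability that misbehaviour surfaces approaches one, so an actor bears nearly the full reputational cost of anything they say.

```python
def expected_reputation_cost(p_surface: float, reputation_damage: float) -> float:
    """Toy model of the real-identity tax on a single post.

    p_surface: probability the post is widely seen and attributed to the
        real person (approaches 1.0 as information asymmetries collapse).
    reputation_damage: real-world cost if it does surface.
    """
    return p_surface * reputation_damage

# The same tax falls on non-consensus opinions, which is the trade-off
# noted above: the deterrent does not distinguish trolling from dissent.
```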
Economics
Pay per use to amplify a given message
An untried (to my awareness) approach to social network taxation, given the Cold Start Problem [1]. In essence, like the advertising model, apply an economic value to the amplification of a message: the more an actor wishes to amplify a statement, the higher the stake required to keep doing so (e.g. a CPM for Retweets).
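A minimal sketch of what such a pricing schedule could look like (the function, parameters, and escalation factor are assumptions for illustration, not anything proposed in the discussion):

```python
def amplification_price(base_cpm: float, impressions_bought: int,
                        escalation: float = 1.5) -> float:
    """Price per 1,000 further impressions for a single message.

    Each additional block of reach costs more than the last, so the
    stake required to keep amplifying a message rises with its spread.
    """
    blocks_already_bought = impressions_bought // 1000
    return base_cpm * (escalation ** blocks_already_bought)

# e.g. at a $5 base CPM, the 10th block of 1,000 impressions costs
# 5 * 1.5**9 ≈ $192, making indefinite amplification expensive.
```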
Balanced Algorithms
Try to improve the “quality” of participants by providing a balanced information set
It is also generally opaque whether or not a network imposes this strategy. Most networks function as an engine to increase product engagement: “show me more of what I want”. This creates the “echo chamber” effect. The Balanced Algorithm approach threatens this in some sense because it attempts to surface content that participants may not be interested in. The aim is to improve the quality of participant knowledge by surfacing balanced information from both sides of a discussion.
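One way to picture the mechanics, as a sketch only (the slot ratio, item lists, and function names are assumed for illustration): reserve a fraction of feed slots for content from outside the participant's inferred viewpoint, rather than ranking purely on predicted engagement.

```python
import random

def balanced_feed(engaging_items: list, counter_view_items: list,
                  balance_ratio: float = 0.2, feed_size: int = 50) -> list:
    """Sketch of a balanced ranking pass.

    engaging_items: items ranked by predicted engagement (the usual feed).
    counter_view_items: items from outside the participant's inferred viewpoint.
    balance_ratio: fraction of slots reserved for counter-viewpoint content.
    """
    n_counter = int(feed_size * balance_ratio)
    n_engaging = feed_size - n_counter
    feed = engaging_items[:n_engaging]
    feed += random.sample(counter_view_items, min(n_counter, len(counter_view_items)))
    random.shuffle(feed)  # interleave so counter-view items are not bunched at the end
    return feed
```

The design choice is explicit here: every reserved slot directly trades predicted engagement for balance, which is exactly why networks optimizing for engagement are reluctant to adopt it.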
Footnotes:
[1] https://www.amazon.com/Cold-Start-Problem-Andrew-Chen-ebook/dp/B08HZ5XY7X