Some very interesting thoughts from an online chat with a US West Coast-based engineer last night:
(There is) a lot of conversation in the Bay about the role of tech companies in the resurgence of fascism and anti-intellectualism.
So many of the companies here are built on monetizing attention, and that leads them to build systems that create feedback loops and echo chambers that lead to radicalization. Facebook, YouTube, and Twitter all do similar things, and the first two especially push people down radicalization rabbit holes they might not have otherwise entered.
At which point I argued that the rise of fascism in Europe needed only the means of communication of its day, and control of those tools.
You’re very right that authoritarians have always had a base and platforms for communication. I just worry that the current attention-based ecosystem cranks that dial all the way to 11.
It’s not just the consumers it influences, but the publishers as well. By building systems that reward content that draws the most eyes and the most clicks, you encourage the publication of things that elicit the strongest, most visceral responses, factuality be damned.
My rational mind suggested that people might look for balance – evidence-based and fact-based information – and that in most democracies that balance could usually be found in the media. Not everyone watches Fox.
I don’t particularly believe that people online actively look for balance. I think they take what their apps feed them, and their apps feed them affirmation.
The problem is, this isn’t a problem any of the social media/tech companies can really fix. It’s a problem inherent to the economics of online advertising. It really is the inevitable conclusion of digital capitalism.
My question was what comes next: does the pendulum move back to something more balanced, assuming that Facebook and Twitter must have a limited life expectancy – much as newspaper circulation has collapsed?
In the end, I don’t see Facebook, YouTube, or Twitter going away. However, I think we might need legislation that limits how content gets recommended to people.
I’m not a strong believer in limiting what can be posted, barring hate speech and calls to violence. However, forcing companies not to design recommendation algorithms that optimise for user time on site and ad revenue should help slow the radicalization death spiral.
I suspect that was the start of a much longer conversation, but I was already feeling out of my depth.