Blacklisting on social media.
Shadow Banning
It is harder than ever to rack up a decent number of views or likes on any social network today. Meta, the parent company of Facebook and Instagram, deployed a machine learning model to identify and reduce the reach of people and pages that rely on “engagement bait”. That is no longer even an open secret. When someone gets people to engage with an account on any platform, that engagement sends a signal telling the system to show the account’s posts more and more. That signal can also be suppressed internally by the platform’s engineers. Yet at the same time, more than a dozen regular users and business account holders have said that engagement on their accounts plunged suddenly, without any explanation or warning. What is really going on? It is a content moderation strategy that has been used on countless internet forums over the years. If content is loaded with repetitive hashtags, especially ones that appear to be banned, or is pushed out through automation tools or schedulers that abuse the platform’s API, the account will probably suffer a very noticeable loss in engagement and reach. This practice is known in the industry as blacklisting, or a shadowban.
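Meta has never published how its model works, so here is a purely illustrative sketch of what a crude engagement-bait filter could look like: a hypothetical keyword heuristic that flags common bait phrases and scales back the expected reach of matching posts. The phrase list, the `looks_like_engagement_bait` and `adjusted_reach` names, and the penalty factor are all assumptions, not Meta’s actual system.

```python
import re

# Hypothetical phrases often cited as "engagement bait"; Meta's real system is
# a machine learning classifier, not a keyword list like this one.
BAIT_PATTERNS = [
    r"\btag a friend\b",
    r"\blike if\b",
    r"\bshare if\b",
    r"\bcomment below\b",
]

def looks_like_engagement_bait(post_text: str) -> bool:
    """Return True if the post text matches any known bait phrase."""
    text = post_text.lower()
    return any(re.search(pattern, text) for pattern in BAIT_PATTERNS)

def adjusted_reach(base_reach: int, post_text: str, penalty: float = 0.2) -> int:
    """Scale down the expected reach of posts flagged as engagement bait."""
    if looks_like_engagement_bait(post_text):
        return int(base_reach * penalty)
    return base_reach

print(adjusted_reach(10_000, "Tag a friend who needs to see this!"))  # 2000
print(adjusted_reach(10_000, "Here is our quarterly report."))        # 10000
```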
The currency of social networks is attention, and a shadowban, in theory, diminishes the ways in which that attention can be earned. Although the phrase “shadowban” is not official, in March 2021 Facebook VP of Global Affairs Nick Clegg, in a post here on Medium, acknowledged that the news feed downranks content with exaggerated headlines (clickbait). On all platforms, posts can be hidden or limited, and accounts can be left unfindable in search, so their content simply no longer appears. Shadow bans evolved out of the moderation tools of the 1980s and 1990s, which relied on flags. Being flagged now means an account gets limited reach because it exhibits immature behavior and cannot be trusted with full access to the network’s features.
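None of the platforms document these mechanisms, but the difference between downranking, hiding, and search exclusion can be made concrete with a small, hypothetical sketch. The `Post` class, the `SHADOWBANNED` set, and the `CLICKBAIT_PENALTY` factor below are invented for illustration and do not reflect any platform’s real ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    base_score: float        # engagement-driven ranking score
    clickbait: bool = False  # set by some headline classifier (assumed)

# Invented examples: real platforms keep shadowban lists and penalties internal.
SHADOWBANNED = {"spammy_account"}
CLICKBAIT_PENALTY = 0.5  # illustrative downrank factor, not a published value

def rank_feed(posts: list[Post]) -> list[Post]:
    """Downrank clickbait instead of removing it; drop shadowbanned authors."""
    visible = [p for p in posts if p.author not in SHADOWBANNED]
    return sorted(
        visible,
        key=lambda p: p.base_score * (CLICKBAIT_PENALTY if p.clickbait else 1.0),
        reverse=True,
    )

def search(posts: list[Post], query: str) -> list[Post]:
    """Shadowbanned accounts never show up in search results."""
    return [
        p for p in posts
        if p.author not in SHADOWBANNED and query.lower() in p.text.lower()
    ]
```

In this toy model the clickbait post still exists and still reaches followers, just lower in the feed, while the shadowbanned author’s posts quietly vanish from both feed and search without any notice to the account.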
Possible Reasons for a Shadowban
1. The use of bots. Using bots breaks the platform’s terms of service, and bots or so-called “growth services” that “build” a following increase the chances of being banned permanently.
2. Bad hashtags. Unsuitable content, such as nudity, spam, or racially offensive material, can sometimes “flood” a hashtag. It may get so bad that the platform decides internally to restrict the hashtag, and any account still using it will be shadowbanned.
3. Reported accounts. If an account is blocked or reported enough times, the moderation team will take action, and an abundance of reports can place the account in the shadowban club.
4. Posting, commenting, engaging, or following in excess. Every platform caps the number of actions an account can take in a given period, and anyone who exceeds those numbers gets sent to the club of shadows; a rough self-throttling sketch based on these figures follows this list. (On Instagram, do not exceed 150 likes, 50 comments, and 50 follows/unfollows per hour. On TikTok, do not follow more than 50 accounts or like more than 100 posts per day. On Twitter, do not follow more than 80 accounts or post more than 50–80 tweets per day; only verified Twitter accounts can follow 1,000 accounts per day. On Reddit, there is a subreddit dedicated to figuring out whether you have been shadowbanned, where a bot will give you the answer. LinkedIn recently imposed a weekly limit of 100 connection requests per user. Facebook is the only platform that works on a sliding time span and on how visible users are: some accounts can send 30 messages and then end up fine, flagged, or suspended. Nothing is certain or transparent.)
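To make the idea concrete, here is a minimal sketch of a self-imposed throttle, assuming the Instagram-style hourly caps quoted above. The `ActionThrottle` class, its limits, and the sliding-window approach are illustrative choices; no platform publishes its real thresholds or enforcement logic.

```python
import time
from collections import defaultdict, deque

# Illustrative per-hour caps taken from the Instagram figures quoted above;
# real platform limits are undocumented and change without notice.
HOURLY_LIMITS = {"like": 150, "comment": 50, "follow": 50}

class ActionThrottle:
    """Track recent actions and refuse any that would exceed an hourly cap."""

    def __init__(self, limits=HOURLY_LIMITS, window_seconds=3600):
        self.limits = limits
        self.window = window_seconds
        self.history = defaultdict(deque)  # action name -> recent timestamps

    def allow(self, action: str, now=None) -> bool:
        now = time.time() if now is None else now
        recent = self.history[action]
        # Drop timestamps that have slid out of the one-hour window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.limits.get(action, 0):
            return False  # over the cap: skip the action rather than risk a flag
        recent.append(now)
        return True

throttle = ActionThrottle()
print(throttle.allow("comment"))  # True until 50 comments in the past hour
```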
Most shadow-banned account owners said that after disappearing for a few days and scaling back their content, they were able to get back on track; no big issue for them. But a growing debate erupted when rumors that Twitter had begun ‘shadowbanning’ were confirmed by a source inside the company. To keep those debates from getting ugly, no social network has admitted to using the technique. So did part of the current confusion come about because conspiracy theorists came into play? Really? Nope, and not all of this confusion is bad. Most of us want some kind of moderation. Recently, however, social networks have abused this feature to push users into paying for engagement. Come on!

The moderation algorithms favor content based on who follows whom and what is trending, while shadowbans tend to happen in waves, based on shifting assumptions that disregard any previously stated rules. This approach of excluding posts from discoverability was never explained to the public in the terms of service at any stage of the platforms’ operations, which created a mess. The secrecy was counterproductive and showed a complete disregard for users. Aside from a rapid drop in reach, people might never find out why they are in that position, or even whether they are. There is no clear guidance on where the line is. Keeping users in the dark has no doubt played a big role in turning the term “shadowban” into a catchphrase.

Now people see strong correlations between online censorship and algorithmic intervention. Shadowbanning sounds quite nefarious, like a big entity censoring us and conspiring against our means of self-expression: a perfect storm in which each social post that fails to gain traction becomes another piece of evidence. In reality, it is a typhoon stuck between a rock and a clear shore. Offering too much transparency about automated systems would make it easier for bad actors to understand and bypass them, so moderation systems are bound to secrecy forever.