Photo: Smoke on a shadowy background (Marek Piwnicki | Canva)

Recognizing and responding to shadow bans

Social media restrictions can render newsworthy reporting on controversial topics invisible

What is shadow banning?

Shadow banning—the practice of reducing the visibility of social media content without the content creator’s knowledge—can restrict the reach of independent news outlets. 

The concern is especially acute for outlets that use social media to engage their communities with reporting on topics that platforms such as Facebook, X (Twitter), Instagram, and TikTok deem controversial.

In February 2024, Meta began to restrict content on Instagram that it deemed “political,” which generated a backlash, especially from users, including prominent journalists, whose posts about Israel’s assault on Gaza were demoted, deleted, or denied by Instagram.

Ryan Sorrell, founder and publisher of the Kansas City Defender, told us that social media posts promoting its reporting have been subject to shadow bans and takedowns, not only because of the issues they cover—including police violence, institutional racism, and Palestine—but also because their reporting uses “unminced language, without euphemisms.”

Motivated by the need to balance freedom of expression with online safety, all social media platforms engage in “content moderation” to determine what content they allow or restrict, and promote or marginalize. 

Shadow bans, arguably the most controversial form of content moderation, restrict the content’s visibility without officially blocking or removing it.

The account owner can continue to use the platform normally, but the shadow ban isolates their account and posts from the general online community. Unaware of the restrictions, content creators are powerless to appeal or protest, leaving them vulnerable to what Kelley Cotter has characterized as “black box gaslighting.”

Users are left to guess whether their content went unseen because of a shadow ban or because it simply was not interesting enough to drive engagement.

Why does it matter?

Although major platforms dispute or deny accusations of shadow banning, studies indicate real impacts. For example, Monica Horten, an internet policy analyst, found that Facebook’s “feature block” function created a “sudden plunge” of 93–99 percent in the reach of UK-based Pages, an effect “not much different from completely unpublishing the page.”

Shadow bans highlight the potential for social media platforms to manipulate public opinion. Tarleton Gillespie has warned that using algorithmic recommendations to reduce the visibility of targeted content has sweeping consequences, influencing “not only what any one user is likely to see but also what society is likely to attend to, take seriously, struggle with, and value.” 

Those consequences disproportionately impact members of marginalized communities, online and offline, as Black TikTok creators and LGBTQ+ content creators on Instagram have called out. As a veiled form of content moderation, shadow bans make it more difficult to hold platforms accountable for restricting (already) marginalized voices.

Here are five recommendations for recognizing and responding to shadow bans of news content:

1. Understand platform guidelines

Some practical tips from the Don’t Delete Art campaign and website are applicable to journalists:

  • Register your account as a professional or business account. This increases the likelihood that you’ll receive information from the platform about whether content you post meets the platform’s standards for recommendation.
  • Review your account status, especially for notifications of recommendation violations.
  • Advise your audience to search for your full account name. Downranked accounts (accounts whose ranking a platform’s algorithm has reduced, making their content appear less often in users’ feeds) often will not appear until the full name is typed into the platform’s Search function.

2. Monitor engagement

Beware of sudden, sharp drops in the reach of your account. You can use analytics provided by each platform to monitor your account’s reach and, on many platforms, the performance of specific posts.

You can also benchmark your content’s reach against similar posts from other accounts with comparable engagement.

There are also third-party, platform-specific tools designed to track evidence of shadow banning. For example, the Shadowban Scanner extension for the Chrome browser and HiSubway’s Twitter Shadowban Test allow you to monitor content reduction on X/Twitter.
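
This kind of monitoring is straightforward to automate from an analytics export. The following is a minimal sketch, assuming you have downloaded daily reach figures from a platform’s analytics dashboard into a CSV file with “date” and “reach” columns; the file name, column names, and thresholds are illustrative assumptions, not platform specifications.

```python
# flag_reach_drops.py -- a minimal monitoring sketch, not an official tool.
# Assumes a CSV export ("reach_export.csv") with columns: date, reach.

import csv
from statistics import median

BASELINE_DAYS = 14     # trailing window used to estimate "normal" reach
DROP_THRESHOLD = 0.10  # flag days below 10% of baseline, echoing the
                       # 93-99 percent plunges Horten documented

def flag_sudden_drops(rows):
    """Yield (date, reach, baseline) for days with a sharp drop in reach."""
    history = []
    for date, reach in rows:
        if len(history) >= BASELINE_DAYS:
            baseline = median(history[-BASELINE_DAYS:])
            if baseline > 0 and reach < baseline * DROP_THRESHOLD:
                yield date, reach, baseline
        history.append(reach)

with open("reach_export.csv", newline="") as f:
    rows = [(r["date"], int(r["reach"])) for r in csv.DictReader(f)]

for date, reach, baseline in flag_sudden_drops(rows):
    print(f"{date}: reach {reach} vs. ~{baseline:.0f} baseline -- "
          "possible shadow ban; check your account status")
```

A flagged day is not proof of a shadow ban; treat it as a prompt to review your account status and benchmark against comparable accounts, as described above.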

3. Beware of “behavior” 

In her study of shadow banning on Facebook, Horten found that posting behavior could trigger a shadow ban, independent of posts’ content.

Avoid posting too frequently, soliciting likes too often, using the same hashtags in every post, and other types of “spammy” behavior. In brief, don’t act like a bot!

Automated content moderation systems were originally deployed to detect spam, and later disinformation. Although much of the controversy around shadow banning focuses on content, automated content moderation systems continue to track behavior as well, reducing the visibility of accounts that engage in online behavior those platforms deem inauthentic, abusive, or manipulative.
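
Platforms do not publish their behavioral thresholds, but you can audit your own posting patterns for the obvious signals. The sketch below is a rough self-check under assumed limits (a ceiling of four posts per hour and hashtags repeated in more than 80 percent of posts); treat these numbers as placeholders, not documented rules.

```python
# spam_signal_audit.py -- a rough self-audit sketch. The limits below are
# assumptions, since platforms do not document their behavioral thresholds.

from collections import Counter
from datetime import datetime

# Each post: (ISO timestamp, hashtags used) -- sample data for illustration
posts = [
    ("2024-09-01T09:00", {"#kansascity", "#news"}),
    ("2024-09-01T09:05", {"#kansascity", "#news"}),
    ("2024-09-01T09:10", {"#kansascity", "#news"}),
    ("2024-09-01T09:15", {"#kansascity", "#breaking"}),
]

BURST_SIZE = 4           # assumed cadence ceiling: 4 posts inside one hour
MAX_HASHTAG_SHARE = 0.8  # flag hashtags used in more than 80% of posts

times = sorted(datetime.fromisoformat(t) for t, _ in posts)
for earlier, later in zip(times, times[BURST_SIZE - 1:]):
    # any BURST_SIZE consecutive posts inside one hour looks "spammy"
    if (later - earlier).total_seconds() < 3600:
        print(f"Warning: {BURST_SIZE} posts within an hour around {earlier}")
        break

tag_counts = Counter(tag for _, tags in posts for tag in tags)
for tag, n in tag_counts.items():
    if n / len(posts) > MAX_HASHTAG_SHARE:
        print(f"Warning: {tag} appears in {n}/{len(posts)} posts -- "
              "vary your hashtags")
```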

4. Consider bypass practices

Many creators with legitimate reasons to address sensitive topics on social media platforms use “algospeak,” coded language designed to evade algorithmic detection by using substitutes for banned terms.

For example, in their study of TikTok creators who post sensitive content about race, gender, or sex and sexuality, Kendra Calhoun and Alexia Fawcett identified four major categories of linguistic innovation used to avoid censorship, including creative use of language (unalive in place of “killed”), word replacement (accounting for “sex work”), and substitute terms that play on similar sounds (seggsy for “sexy,” droogs for “drugs”). Sorrell, the Kansas City Defender publisher, described blocking letters in terms, such as “racist,” that frequently trigger algorithmic bans. A minimal sketch of that letter-blocking approach follows.
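
The sketch below masks the interior vowels of trigger words before a post is published. The trigger-word list and the masking style are hypothetical examples for illustration, not a documented set of banned terms.

```python
# algospeak_mask.py -- a minimal sketch of letter blocking. The trigger
# list below is a hypothetical example, not a documented set of terms.

import re

TRIGGER_WORDS = ["racist", "killed"]  # illustrative only

def mask_word(word):
    """Replace a word's interior vowels with '*' ('racist' -> 'r*c*st')."""
    return word[0] + re.sub(r"[aeiou]", "*", word[1:-1]) + word[-1]

def apply_masking(text):
    """Mask whole-word occurrences of each trigger, case-insensitively."""
    for word in TRIGGER_WORDS:
        pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
        text = pattern.sub(lambda m: mask_word(m.group()), text)
    return text

print(apply_masking("Protest over racist policing"))
# -> Protest over r*c*st policing
```

The word-boundary matching matters: without it, unrelated words containing a trigger (such as “skilled,” which contains “killed”) would be garbled. A human should still review the masked text before posting.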

Although algospeak may provide many social media users with a means to avoid censorship, we fear that its use in promoting journalism might undermine the credibility of the reporting for some readers. It may also violate many newsrooms’ standards for clear use of language.

For video, publish potentially sensitive text in the video’s image, rather than in its description. This makes it harder for automated content moderation systems to track and flag the sensitive content.

Of course, the fundamental principles of ethical journalism apply: Content should reflect the cardinal professional values of maximizing transparency and minimizing harm in the course of seeking and reporting truth.

5. Use content warnings

“Sensitive content” warnings can help protect your posts from shadow bans and takedowns. Sorrell described content warnings as “the single biggest protective mechanism” the Kansas City Defender uses to avoid content reduction or removal.

There are built-in options for adding sensitive content warnings on Instagram, for instance. It’s important to make your warning clear and concise. Avoid generalities, such as “This post may not be suitable for all viewers,” in favor of direct language that clearly represents the content that follows, such as “Sensitive content: gun violence.”

Place the warning prominently. Use a font style and size that is easy to read. Use the post’s caption to contextualize the content.

Preview the content warning and make any necessary adjustments before you post.

Share your experiences with us

Given the potential of shadow bans to prevent vital news stories from going viral on social media, the recommendations presented here are primarily defensive. We hope to build on them by incorporating additional insights from journalists whose reporting has been subject to shadow banning. Please contact us with any suggestions based on your experience.

We will incorporate those suggestions into the final version of Algorithmic Literacy for Journalists.

At the same time, we need to plan for the long term by seeking changes in platform policies, including third-party oversight, that will hold platforms accountable when they employ content reduction practices to restrict legitimate but potentially controversial content, including independent journalism.


Cite this article

Roth, Andy Lee, and anderson, avram (2024, September 30). Recognizing and responding to shadow bans. Reynolds Journalism Institute. https://rjionline.org/news/recognizing-and-responding-to-shadow-bans/
