Many believe TikTok has a moderation problem, as backlash mounts against the app over how content is filtered for the For You page.
For those who don’t know, TikTok is a video content platform used by many to share videos from their lives, their work, and even mini-documentaries. The For You page is highly tailored: the algorithm uses your past follows, likes, and even your interests on other apps to cultivate a specific experience.
This For You page, and its accuracy, has drawn plenty of skepticism, as many find its practices invasive. Many feel their spoken conversations are picked up by the app and used in content curation. Many go even further, though, and raise concerns about how TikTok moderates.
In most regions, TikTok has dedicated moderators who filter content by tags, algorithmic patterns, or user reports. These moderators follow specific guidelines on what to deliver, or not deliver, to devices, and those guidelines have not been publicly released to either users or creators.
It’s at this stage that many started to have concerns, beginning with leaked documents highlighted by The Intercept in March 2020. The outlet published two internal TikTok documents, both translated from their original language (TikTok is owned by ByteDance, a company based in Beijing).
One lays out bans on ideologically undesirable content in livestreams, and the other describes algorithmic punishments for users deemed unattractive or impoverished.
TikTok spokesperson Josh Gartner told The Intercept that “most of” the livestream guidelines reviewed by The Intercept “are either no longer in use, or in some cases appear to never have been in place,” but would not provide specifics. Regarding the policy of suppressing videos featuring unattractive, disabled, or poor users, Gartner stated that the rules “represented an early blunt attempt at preventing bullying, but are no longer in place, and were already out of use when The Intercept obtained them.”
This wasn’t the only time TikTok’s moderation guidelines hit the pages of tech journalism. According to the MIT Technology Review in November 2020, another whistleblower, speaking with Netzpolitik, revealed more about the problematic ways content is filtered.
Controversial content on the app is divided into the categories of “deleted,” “visible to self” (meaning other users can’t see it), “not recommended,” and “not for feed.” Videos in the last two categories won’t be surfaced by the main TikTok discovery engine, and “not for feed” also makes a video harder to find in search.
According to the guidelines, most political content during election periods should be marked “not recommended.” Political content includes everything from partisan speeches to party banners. Police content, including filming inside a police station or jail, is marked “not for feed.”
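To make the difference between those last two tiers concrete, here is a purely illustrative sketch. Nothing here comes from TikTok’s actual code: the `Visibility` names, the helper functions, and the penalty value are all hypothetical, and only mirror what the leaked guidelines describe, namely that both “not recommended” and “not for feed” keep a video out of the discovery feed, while “not for feed” additionally demotes it in search.

```python
from enum import Enum, auto

class Visibility(Enum):
    """Hypothetical visibility tiers mirroring the leaked categories."""
    DELETED = auto()          # removed entirely
    VISIBLE_TO_SELF = auto()  # only the uploader can see it
    NOT_RECOMMENDED = auto()  # kept out of For You recommendations
    NOT_FOR_FEED = auto()     # kept out of recommendations AND demoted in search
    NORMAL = auto()           # no restrictions

def eligible_for_feed(v: Visibility) -> bool:
    # Both "not recommended" and "not for feed" are excluded from the discovery feed.
    return v is Visibility.NORMAL

def search_rank_multiplier(v: Visibility) -> float:
    # Only "not for feed" also makes a video harder to find in search.
    # The 0.5 figure is invented purely for illustration.
    return 0.5 if v is Visibility.NOT_FOR_FEED else 1.0
```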
This is why many users keep a close eye on their feeds and on the way TikTok operates. TikTok still hasn’t released its content moderation guidelines, and pushing for their release has become a rallying point for those speaking out against the app.
One creator, @zevulous, has had a series going for 86 days at the time of writing, asking TikTok to release its content moderation guidelines.
“For 86 days I’ve asked TikTok to release the content moderation guidelines,” @zevulous sings over guitar. “To explain how racists, sexists, homophobes, antisemites, literally nazis, xenophobes, and transphobes, the list goes on, why are they allowed to make content and not get taken down? When educational creators make content addressing the hate they get taken down.”