‘Algospeak’ is changing our language in real time


“Algospeak” is becoming increasingly common on the internet as people seek to bypass content moderation filters on social media platforms like TikTok, YouTube, Instagram and Twitch.

Algospeak refers to code words or turns of phrase that users have adopted in an effort to create a brand-safe lexicon that will keep content moderation systems from removing or down-ranking their posts. For example, in many online videos, it is common to say “unalive” instead of “dead,” “SA” instead of “sexual assault,” or “spicy eggplant” instead of “vibrator.”

As the pandemic pushed more people to communicate and express themselves online, algorithmic content moderation systems have had an unprecedented impact on the words we choose, particularly on TikTok, and have given rise to a new form of internet-driven Aesopic language.

Unlike other major social platforms, the primary way content is distributed on TikTok is through an algorithmically curated “For You” page; having followers does not guarantee that people will see your content. This shift has led average users to tailor their videos primarily to the algorithm, rather than to their followers, which means abiding by content moderation rules is more crucial than ever.

When the pandemic broke out, people on TikTok and other apps began referring to it as the “Backstreet Boys reunion tour” or calling it the “panini” or “panda express,” as platforms down-ranked videos mentioning the pandemic by name in an effort to combat misinformation. When young people began to discuss struggling with mental health, they talked about “becoming unalive” in order to have frank conversations about suicide without algorithmic punishment. Sex workers, who have long been censored by moderation systems, refer to themselves on TikTok as “accountants” and use the corn emoji as a substitute for the word “porn.”

As discussions of major events are filtered through algorithmic content delivery systems, more users are bending their language. Recently, in discussing the invasion of Ukraine, people on YouTube and TikTok have used the sunflower emoji to signify the country. When encouraging fans to follow them elsewhere, users will say “blink in lio” for “link in bio.”

Euphemisms are especially common in radicalized or harmful communities. Pro-anorexia eating disorder communities have long adopted variations on moderated words to evade restrictions. One paper from the School of Interactive Computing at the Georgia Institute of Technology found that the complexity of such variants even increased over time. Last year, anti-vaccine groups on Facebook began changing their names to “dance party” or “dinner party,” and anti-vaccine influencers on Instagram used similar code words, referring to vaccinated people as “swimmers.”

Adapting language to avoid scrutiny predates the Internet. Many religions have avoided pronouncing the devil’s name so as not to invoke him, while people living in repressive regimes developed code words to discuss taboo subjects.

Early Internet users used alternate spellings or “leetspeak” to bypass word filters in chat rooms, image boards, online games and forums. But algorithmic content moderation systems are more pervasive on the modern internet, and they often end up silencing marginalized communities and important discussions.

During YouTube’s “adpocalypse” in 2017, when advertisers pulled their dollars from the platform over fears of unsafe content, LGBTQ creators spoke of having videos demonetized for saying the word “gay.” Some began using the word less or substituting other words to keep their content monetized. More recently, TikTok users have started saying “cornucopia” instead of “homophobia,” or saying they are members of the “leg booty” community to signify that they are LGBTQ.

“There’s a line we have to toe; it’s an unending battle of saying something and trying to get the message across without directly saying it,” said Sean Szolek-VanValkenburgh, a TikTok creator with more than 1.2 million followers. “It disproportionately affects the LGBTQIA community and the BIPOC community because we’re the people who create that verbiage and create the colloquialisms.”

Conversations about women’s health, pregnancy and menstrual cycles on TikTok are also consistently down-ranked, said Kathryn Cross, a 23-year-old content creator and founder of Anja Health, a startup offering umbilical cord blood banking. She replaces the words “sex,” “period” and “vagina” with other words or spells them out with symbols in her captions. Many users say “nip nops” instead of “nipples.”

“It makes me feel like I need a disclaimer because I feel like it makes you look unprofessional to have these misspelled words in your captions,” she said, “especially for content that is supposed to be serious and medically inclined.”

Because online algorithms often flag content mentioning certain words, devoid of context, some users avoid uttering them altogether, simply because they have alternate meanings. “You have to say ‘saltines’ when you’re literally talking about crackers right now,” said Lodane Erisian, a community manager for Twitch creators (Twitch considers the word “cracker” a slur). Twitch and other platforms have even gone so far as to remove certain emotes because people were using them to communicate certain words.

Black and trans users, and those from other marginalized communities, often use algospeak to discuss the oppression they face, swapping out words for “white” or “racist.” Some are too nervous to utter the word “white” at all and simply hold a palm up to the camera to signify white people.

“The reality is that tech companies have been using automated tools to moderate content for a really long time, and while it’s touted as sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination.
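
To make Díaz’s point concrete, the simplest version of such a system is little more than a blocklist lookup. The Python sketch below is purely illustrative; the word list and the function name are invented for the example and do not come from any platform’s actual code.

```python
# A minimal sketch of a context-free keyword filter: a flat list of
# "problematic" words, checked with no understanding of meaning.
# BLOCKLIST and flag_post are hypothetical names for illustration.

BLOCKLIST = {"dead", "sex", "cracker", "vaccine"}

def flag_post(text: str) -> bool:
    """Flag a post if any token matches the blocklist, context ignored."""
    tokens = (word.strip(".,!?").lower() for word in text.split())
    return any(token in BLOCKLIST for token in tokens)

print(flag_post("These crackers go great with soup"))  # True: a false positive
print(flag_post("My sourdough starter recipe"))        # False
```

A filter like this cannot tell a snack from a slur, which is exactly the “saltines” problem Erisian describes above.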

In January, Kendra Calhoun, a postdoctoral researcher in linguistic anthropology at UCLA, and Alexia Fawcett, a doctoral student in linguistics at UC Santa Barbara, gave a presentation about language on TikTok. They outlined how, by self-censoring words in TikTok captions, new algospeak code words emerged.

TikTok users now use the phrase “le dollar bean” instead of “lesbian” because that is how TikTok’s text-to-speech feature pronounces “le$bian,” a censored way of writing “lesbian” that users believe will evade content moderation.
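
Platforms can, of course, counter spellings like “le$bian” with a normalization pass that maps common symbol-for-letter swaps back to plain text before matching. The sketch below is an assumption about how such a pass might work, not a documented TikTok mechanism; the substitution table is invented for the example.

```python
# A sketch of symbol-for-letter normalization: undo swaps like "$" -> "s"
# before running a blocklist match. LEET_MAP is a hypothetical table.

LEET_MAP = str.maketrans({"$": "s", "3": "e", "0": "o", "1": "i", "@": "a"})

def normalize(text: str) -> str:
    """Lowercase the text and reverse simple character substitutions."""
    return text.lower().translate(LEET_MAP)

print(normalize("Le$bian"))  # "lesbian" -- the disguise is trivially reversible
print(normalize("s3ggs"))    # "seggs" -- a full respelling survives this pass
```

Because respellings like “seggs” survive a pass like this, each round of filtering tends to push users toward new coinages rather than eliminating the topic.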

Evan Greer, director of Fight for the Future, a digital rights nonprofit advocacy group, said that trying to stamp out specific words on platforms is a fool’s errand.

“One, it doesn’t actually work,” she said. “The people who are using platforms to organize real harm are pretty good at figuring out how to get around these systems. And two, it leads to collateral damage of literal speech.” Trying to regulate human speech at a scale of billions of people in dozens of different languages, and trying to contend with things such as humor, sarcasm, local context and slang, can’t be done by simply down-ranking certain words, Greer argues.

“I feel like this is a good example of why aggressive moderation is never going to be a real solution to the harms we see in the business practices of big tech companies,” she said. “You can see how slippery this slope is. Over the years, we have increasingly seen the misguided demand from the general public that platforms remove more content quickly, regardless of the cost.”

Big TikTok creators have created shared Google Docs with lists of hundreds of words they believe the app’s moderation systems deem problematic. Other users keep running tallies of terms they believe have throttled certain videos, trying to reverse engineer the system.
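
That reverse engineering can be imagined as a crude frequency comparison: tally the words appearing in videos believed to be throttled and discard the words that also appear in videos performing normally. The sketch below is purely illustrative; the captions are invented and no platform data is involved.

```python
# A sketch of the folk reverse-engineering described above: count terms
# in "throttled" captions and keep those absent from healthy ones.
from collections import Counter

throttled_captions = ["my video about sex education",
                      "talking about the vaccine today"]
healthy_captions = ["my pasta recipe",
                    "talking about my new puppy today"]

def term_counts(captions):
    counts = Counter()
    for caption in captions:
        counts.update(set(caption.lower().split()))  # each term once per caption
    return counts

suspect = term_counts(throttled_captions)
baseline = term_counts(healthy_captions)
candidates = sorted(w for w in suspect if baseline[w] == 0)
print(candidates)  # noisy: harmless words like "video" surface alongside "sex"
```

The noise in the output, where harmless words surface alongside genuinely filtered ones, is one reason such crowd-sourced lists tend to be long, contradictory and unreliable.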

“Zuck Got Me For,” a site created by a meme account administrator who goes by Ana, is a place where creators can upload nonsensical content that was banned by Instagram’s moderation algorithms. In a manifesto about her project, she wrote: “Creative freedom is one of the only positive aspects of this burning online hell in which we all exist … As the algorithms crack down, it’s the independent creators who suffer.”

The site also outlines how to speak online in a way that evades filters. “If you’ve violated terms of service, you may not be able to use profanity or negative words such as ‘hate,’ ‘kill,’ ‘ugly,’ ‘stupid,’ etc.,” she said. “I often write ‘I opposite of love xyz’ instead of ‘I hate xyz.’”

The Online Creators’ Association, a labor advocacy group, has also issued a list of demands, asking TikTok for more transparency about how it moderates content. “People have to dull down their own language to keep from offending these all-seeing, all-knowing TikTok gods,” said Cecelia Gray, a TikTok creator and co-founder of the organization.

TikTok offers an online resource center for creators seeking to learn more about its recommendation systems, and it has opened multiple transparency and accountability centers where guests can learn how the app’s algorithm operates.

Vince Lynch, CEO of IV.AI, an artificial intelligence platform for understanding language, said that in some countries where moderation is stricter, people end up building new dialects to communicate. “It becomes real sublanguages,” he said.

But as algospeak becomes more popular and replacement words turn into common slang, users are finding that they have to get ever more creative to evade the filters. “It becomes a game of whack-a-mole,” said Gretchen McCulloch, a linguist and author of “Because Internet,” a book about how the Internet has shaped language. As platforms begin to notice people saying “seggs” instead of “sex,” for example, some users report that they believe even the replacement words are being flagged.

“We end up creating new ways of speaking to avoid this kind of moderation,” said Díaz of the UCLA School of Law, “then we end up adopting some of these words and they become common vernacular. It’s all born out of this effort to resist moderation.”

This does not mean that all efforts to stamp out bad behavior, harassment, abuse and misinformation are fruitless. But Greer argues that the root problems are what should be prioritized. “Aggressive moderation is never going to be a real solution to the harms we see in the business practices of big tech companies,” she said. “That is a task for policymakers and for building better things, better tools, better protocols and better platforms.”

Ultimately, she added, “you will never be able to sanitize the Internet.”
