Is Twitter’s algorithm really the only one responsible? Last week, the account of feminist and anti-racist activist Mélusine, along with several others, was suspended over a simple question: “How do we get men to stop raping?” After an outcry on the social network, Twitter finally acknowledged an error and pointed to its algorithm.
“We have increased our use of machine learning and automation to take more action on potentially abusive and manipulative content. We want to be clear: although we strive to ensure the consistency of our systems, the context usually provided by our teams can sometimes be lacking, leading us to make mistakes,” Twitter France explained to 20 Minutes on Thursday.
The humans behind the algorithms
It is true that artificial intelligence still has progress to make in spotting hateful posts. It does not pick up on irony (to tell the truth, humans sometimes don’t either), and it frankly struggles to grasp the context of certain exchanges. As a result, several tweets by LGBT activists were recently censored for containing the words “dykes” or “queers,” used in a reappropriation of the slur. Are the algorithms really that bad?
“Artificial intelligence will never be 100 % accurate, admits Isabelle Collet, a researcher at the University of Geneva who works on gender issues in tech and on equality pedagogy. When it analyzes context, the AI does not understand the meaning of the sentence, but it can draw analogies with sentences already classified as hateful. By analogy, it can say: there is a 90 % chance that this tweet is hateful, because it is 90 % similar to tweets that have been certified as hateful.”
In the field of artificial intelligence, databases are the sinews of war. The more annotated, labeled, and classified data there is, the better the algorithm performs. But that database is built by humans, who decide what counts as hateful or insulting content. “It is humans who draw the line,” the researcher emphasizes. At the root, a human intelligence decided what constitutes an insult and what does not, and that choice is a matter of subjectivity. In short, artificial intelligence only follows what it has been taught.
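The analogy-based scoring Isabelle Collet describes can be sketched in a few lines. Everything here is hypothetical: the labeled corpus, the crude stemming, the Jaccard word-overlap similarity and the flagging threshold are stand-ins for whatever Twitter actually uses. The sketch only illustrates her point: a purely lexical model can flag a tweet denouncing rape because it resembles a tweet promoting it, and both the labels and the threshold are human, subjective choices.

```python
# Toy sketch of analogy-based moderation (hypothetical, NOT Twitter's real system).
# A tweet is flagged when it is lexically similar enough to examples
# that human annotators have already labeled as hateful.

def stem(word):
    """Very crude stemmer so 'raping' and 'rape' map to the same token."""
    for suffix in ("ing", "s", "e"):
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)]
    return word

def tokens(text):
    """Lowercase bag of stemmed words, stripped of punctuation."""
    return {stem(w.strip(".,!?'\"")) for w in text.lower().split()}

def max_similarity(tweet, labeled_hateful):
    """Highest Jaccard overlap between the tweet and any labeled example."""
    t = tokens(tweet)
    return max(
        len(t & tokens(example)) / len(t | tokens(example))
        for example in labeled_hateful
    )

# The labeled data: a human decision about what counts as hateful.
labeled_hateful = ["how to rape a woman"]

# A question *denouncing* rape shares its key words with the labeled example,
# so a model with no grasp of meaning scores it as similar.
score = max_similarity("how do we get men to stop raping", labeled_hateful)
flagged = score > 0.2  # the threshold, too, is a human and subjective choice
```

Here the tweet denouncing rape scores 0.3 and gets flagged, even though its meaning is the opposite of the labeled example; swapping the word-overlap measure for embeddings changes the details but not the dependence on human-chosen labels and thresholds.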
“How to rape a woman”
In the case of the suspended feminist accounts, blaming the limits of the algorithm raises questions. Last May, a teenager posted a thread on Twitter titled “How to Rape a Woman.” “Over some fifteen tweets, he repeated these questions and his account was not suspended, recalls Isabelle Collet. I wonder about this pretext of automation. If the algorithm triggered on ‘how to make men stop raping,’ it should have triggered on ‘how to rape a woman.’” Likewise, @pastadaronne had already used the word “dyke” in posts prior to the one that was hidden.
Some believe these seemingly random account suspensions could be the result of mass reporting. Journalist Titiou Lecoq draws this link in Slate regarding the feminist accounts. “It is difficult to see which keywords triggered the suspension,” she writes. “Especially since some accounts that shared it were suspended and others were not. (…) Or, another hypothesis: masculinist activists banded together to report the tweets (especially since it is common in these circles to run several accounts, which lets each individual multiply their capacity for harm).”
Another automation problem
As coordinated campaigns and raids multiply online, couldn’t Twitter identify which accounts are behind these reports? This is indeed an automation problem, but not the one we think. “Take the example of the CSA: once it receives a certain number of complaints, it looks into what is happening. Twitter, conversely, just cuts, notes Isabelle Collet. That is a shame, because a large number of reports come from problematic groups.”
If Twitter has taken down tens of thousands of conspiracy accounts, why doesn’t it tackle the highly organized masculinists crusading across the web?