Amnesty International used machine-learning to quantify the scale of abuse against women on Twitter

A new study by Amnesty International and Element AI puts numbers to a problem many women already know about: that Twitter is a cesspool of harassment and abuse. Conducted with the help of 6,500 volunteers, the study, billed by Amnesty International as the "largest ever" into online abuse against women, used machine-learning software from Element AI to analyze tweets sent to a sample of 778 women politicians and journalists during 2017. It found that 7.1%, or 1.1 million, of those tweets were either "problematic" or "abusive," which Amnesty International said amounts to one abusive tweet sent every 30 seconds.
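
That "every 30 seconds" figure follows directly from the annual total; a quick back-of-the-envelope check:

    # Sanity check of the "one abusive tweet every 30 seconds" figure,
    # using only the numbers reported by the study.
    abusive_tweets = 1_100_000              # problematic or abusive tweets in 2017
    seconds_in_year = 365 * 24 * 60 * 60    # 31,536,000 seconds in a year

    print(seconds_in_year / abusive_tweets)  # ~28.7, i.e. roughly one tweet every 30 seconds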

On an interactive website breaking down the study’s methodology and results, Amnesty International said many women either censor what they post, limit their interactions on Twitter, or just quit the platform altogether. “At a watershed moment when women around the world are using their collective power to amplify their voices through social media platforms, Twitter’s failure to consistently and transparently enforce its own community standards to tackle violence and abuse means that women are being pushed backwards towards a culture of silence,” stated the human rights advocacy organization.

Amnesty International, which has been researching abuse against women on Twitter for the past two years, signed up 6,500 volunteers for what it refers to as the "Troll Patrol" after releasing another study in March 2018 that described Twitter as a "toxic" place for women. The Troll Patrol's volunteers, who come from 150 countries and range in age from 18 to 70 years old, received training about what constitutes a problematic or abusive tweet. Then they were shown anonymized tweets mentioning one of the 778 women and asked whether or not the tweets were problematic or abusive. Each tweet was shown to several volunteers. In addition, Amnesty International said "three experts on violence and abuse against women" also categorized a sample of 1,000 tweets to "ensure we were able to assess the quality of the tweets labelled by our digital volunteers."
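
Amnesty International hasn't published the exact rule it used to combine the volunteers' overlapping judgments, but a common approach for crowdsourced labelling of this kind is majority voting, with disagreements set aside for expert review. A minimal sketch, with a hypothetical aggregate_labels function and invented label names:

    from collections import Counter

    # Hypothetical sketch of crowd-label aggregation; the study's actual
    # rule is not public. Each tweet gets one vote per volunteer, and the
    # majority label wins, with ties escalated for expert review.
    def aggregate_labels(votes):
        counts = Counter(votes).most_common(2)
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            return "needs_expert_review"  # no majority among volunteers
        return counts[0][0]

    print(aggregate_labels(["abusive", "abusive", "problematic"]))  # abusive
    print(aggregate_labels(["abusive", "problematic"]))             # needs_expert_review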

The study defined "problematic" as tweets "that contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse," while "abusive" meant tweets "that violate Twitter's own rules and include content that promotes violence against or threats of people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease."

In total, the volunteers analyzed 288,000 tweets sent between January and December 2017 to the 778 women studied, who included politicians and journalists from across the political spectrum in the United Kingdom and United States. Politicians included members of the U.K. Parliament and the U.S. Congress, while journalists represented a diverse group of publications including The Daily Mail, The New York Times, The Guardian, The Sun, gal-dem, Pink News, and Breitbart.

A subset of the labelled tweets was then processed using Element AI's machine-learning software to extrapolate the analysis to the total of 14.5 million tweets that mentioned the 778 women during 2017. (Since tweets weren't collected for the study until March 2018, Amnesty International notes that the scale of abuse was likely even higher, because some abusive tweets may have been deleted or sent by accounts that were suspended or disabled.) Element AI's extrapolation produced the finding that 7.1% of tweets sent to the women were problematic or abusive, amounting to 1.1 million tweets in 2017.
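
Element AI hasn't detailed its pipeline, but the general technique — train a text classifier on the crowd-labelled subset, then apply it to the full corpus and read off the flagged fraction — can be sketched in a few lines. The variable names, model choice, and estimate_prevalence function below are illustrative, not the study's:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # labelled_texts: the ~288,000 volunteer-labelled tweets
    # labels: 1 for problematic/abusive, 0 otherwise
    # all_mentions: the full 14.5 million tweets mentioning the 778 women
    def estimate_prevalence(labelled_texts, labels, all_mentions):
        model = make_pipeline(TfidfVectorizer(min_df=5),
                              LogisticRegression(max_iter=1000))
        model.fit(labelled_texts, labels)    # learn from the crowd labels
        flags = model.predict(all_mentions)  # classify every mention
        return flags.mean()                  # fraction flagged, e.g. ~0.071

A raw flag rate like this inherits the classifier's errors, which is one reason validating against expert-labelled samples, as the study did for its volunteer labels, matters.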

Black, Asian, Latinx, and mixed-race women were 34% more likely to be mentioned in problematic or abusive tweets than white women. Black women were especially vulnerable: they were 84% more likely than white women to be mentioned in problematic or abusive tweets. One in 10 tweets mentioning black women in the study sample was problematic or abusive, compared to one in 15 for white women.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” said Milena Marin, Amnesty International’s senior advisor for tactical research, in the statement.

Breaking down the results by profession, the study found that 7% of tweets that mentioned the 454 journalists in the study were either problematic or abusive. The 324 politicians were targeted at a similar rate, with 7.12% of the tweets mentioning them being problematic or abusive.

Of course, findings from a sample of 778 journalists and politicians in the U.K. and U.S. are difficult to extrapolate to other professions, countries, or the general population. The study's findings are important, however, because many politicians and journalists need to use social media in order to do their jobs effectively. Women, and especially women of color, are underrepresented in both professions, and many stay on Twitter simply to make a statement about visibility, even though it means dealing with constant harassment and abuse. Furthermore, Twitter's API changes mean many third-party anti-bullying tools no longer work, as technology journalist Sarah Jeong noted on her own Twitter profile, and the platform has yet to come up with tools that replicate their functionality.

Amnesty International's other research about abusive behavior toward women on Twitter includes a 2017 online poll of women in eight countries and an analysis of abuse faced by female members of Parliament before the U.K.'s 2017 snap election. The organization said the Troll Patrol isn't about "policing Twitter or forcing it to remove content." Instead, it wants the platform to be more transparent, especially about the machine-learning algorithms it uses to detect abuse.

Because the largest social media platforms now rely on machine learning to scale their anti-abuse monitoring, Element AI also used the study’s data to develop a machine-learning model that automatically detects abusive tweets. For the next three weeks, the model will be available to test on Amnesty International’s website in order to “demonstrate the potential and current limitations of AI technology.” These limitations mean social media platforms need to fine-tune their algorithms very carefully in order to detect abusive content without also flagging legitimate speech.
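
One concrete way to see that trade-off is to sweep an abuse classifier's decision threshold and watch precision (how often a flag is correct) trade against recall (how much abuse is caught). A minimal sketch with made-up scores, not the study's data:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # expert labels, held-out set
    scores = np.array([0.10, 0.40, 0.35, 0.80, 0.20,
                       0.90, 0.05, 0.50, 0.65, 0.30])  # model's abuse scores

    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    for p, r, t in zip(precision, recall, thresholds):
        # A high threshold flags less legitimate speech (high precision)
        # but misses more abuse (low recall), and vice versa.
        print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")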

“These trade-offs are value-based judgements with serious implications for freedom of expression and other human rights online,” the organization said, adding that “as it stands, automation may have a useful role to play in assessing trends or flagging content for human review, but it should, at best, be used to assist trained moderators, and certainly should not replace them.”

TechCrunch has contacted Twitter for comment.

Written by Catherine Shu
This news first appeared on https://techcrunch.com/2018/12/18/amnesty-international-used-machine-learning-to-quantify-the-scale-of-abuse-against-women-on-twitter/ under the title "Amnesty International used machine-learning to quantify the scale of abuse against women on Twitter". Bolchha Nepal is not responsible for, or affiliated with, the opinions expressed in this news article.