Online abuse remains the big hairy monster in the room for platforms powered by user-generated content. Twitter especially has had some sizable and very public problems with abusive users, taking flak in recent years for being the go-to social media conduit for orchestrated misogynistic campaigns such as #Gamergate.
Or, more recently, for being the platform where trolls taught Microsoft’s ingénue AI chatbot Tay to be racist and sexist double quick.
Twitter knows it has a problem with users appropriating its platform to spread hate speech and/or harass others. Former CEO Dick Costolo conceded back in February 2015 that ‘we suck at dealing with abuse’.
Since then it has stepped up efforts to combat the spread of abusive tweets: launching a range of tools to help users better manage their interactions with others; bringing in algorithmic filtering to try to limit the spread of abusive tweets; and, earlier this year, forming a trust and safety council to take external input on how to balance its free speech mission against the need to prevent orchestrated harassment and abuse.
But despite all its efforts the abusive monster has clearly not been magicked away. And now a new study by UK think tank Demos flags misogyny as an ongoing issue on Twitter, although the study’s authors are keen to point out it’s by no means the only platform with problems in this regard.
“It’s important to note that misogyny is prevalent across all social media, and we must make sure that the other big tech companies are also involved in discussions around education and developing solutions,” writes Demos researcher Alex Krasodomski-Jones in a statement. “This is less about policing the internet than it is a stark reminder that we are frequently not as good citizens online as we are offline.”
Still, Twitter remains an interesting case, given that the open structure of its network can more easily facilitate the spread of abuse than more closed or siloed social networks, and given that it has been trying to provide better tools and strategies to combat abuse for several years.
Safe to say, no quick fix looks likely to solve this complex problem. And even sustained strategies aren’t necessarily a panacea. But that just underlines the need for ongoing effort to try to shape online communities for the better over the long term.
How many misogynistic tweets?
Demos’ study analysed 1.5 million tweets sent by UK Twitter users between April 23 and May 15, 2016 that included the words ‘slut’ or ‘whore’. The think tank then used its in-house natural language processing tool to filter the collected tweets, differentiating between actively aggressive, conversational and self-identification uses of the two terms.
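For the curious, here’s roughly what that two-stage approach looks like in practice. This is a minimal sketch, not Demos’ actual tool (which isn’t public): the keyword filter, labelled examples and classifier choice below are all hypothetical placeholders.

```python
# A minimal sketch of the study's two-stage approach: collect tweets that
# contain the target terms, then classify each use of those terms as
# aggressive, conversational or self-identification.
# The training examples and model below are hypothetical, not Demos' own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TARGET_TERMS = ("slut", "whore")

def contains_target_term(tweet: str) -> bool:
    """Stage 1: keyword filter applied to the raw tweet stream."""
    text = tweet.lower()
    return any(term in text for term in TARGET_TERMS)

# Stage 2: a supervised classifier trained on hand-labelled tweets.
# In practice this would need thousands of annotated examples per class.
labelled_tweets = [
    ("shut up you stupid slut", "aggressive"),
    ("the word whore gets thrown around far too casually online", "conversational"),
    ("proud attention whore and not even sorry", "self-identification"),
]
texts, labels = zip(*labelled_tweets)

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LogisticRegression(),
)
classifier.fit(texts, labels)

def classify_usage(tweet: str) -> str:
    """Assign one of the study's three usage categories to a tweet."""
    return classifier.predict([tweet])[0]
```

As the researcher notes below, any such natural language processing is inherently probabilistic, so individual classifications will sometimes be wrong even if the aggregate percentages hold up.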
Over the three-week collection period it says it found 6,500 unique users being targeted by 10,000 explicitly aggressive and misogynistic tweets.
Internationally, meanwhile, it says more than 200,000 aggressive tweets using the same terms were sent to 80,000 people over the same three-week period.
It also claims 50 per cent of the propagators of the abusive tweets were women, although it’s unclear how it determined or confirmed the gender of the senders (given that Twitter names and profile photos are easily faked). We’ve asked and will update this post with any response.
Update: A Demos researcher confirmed the algorithm looks at names and descriptions to make a guess at gender, adding: “It’s a pretty good guess — we think it’s about 85 per cent accurate on a three-way split of male/female/organisation. As you say, any kind of natural language processing is inherently probabilistic and it will have made mistakes, which is why we have steered well clear of naming anybody who might be throwing abuse about online, but with enough good guesses on a dataset of almost half a million the overall percentages are good.”
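To make that concrete, a name-and-description guess of the kind described might look something like the sketch below. The name lists and organisation keywords here are invented for illustration; Demos hasn’t published its actual method, and a production classifier would be probabilistic rather than rule-based.

```python
# An illustrative (hypothetical) account-type guesser of the kind the
# researcher describes: match the profile name against name lists, and use
# keywords in the bio to spot organisational accounts.
FEMALE_NAMES = {"emma", "sarah", "priya"}   # placeholder name lists
MALE_NAMES = {"james", "mohammed", "david"}
ORG_KEYWORDS = {"official", "news", "ltd", "magazine", "team"}

def guess_account_type(display_name: str, description: str) -> str:
    """Return a best guess: 'male', 'female', 'organisation' or 'unknown'."""
    bio = (display_name + " " + description).lower()
    if any(keyword in bio for keyword in ORG_KEYWORDS):
        return "organisation"
    tokens = display_name.strip().lower().split()
    first_name = tokens[0] if tokens else ""
    if first_name in FEMALE_NAMES:
        return "female"
    if first_name in MALE_NAMES:
        return "male"
    return "unknown"  # a real system would score these probabilistically

print(guess_account_type("Sarah Jones", "coffee, books, opinions my own"))
# -> female
```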
The think tank conducted a previous study into misogyny on Twitter back in 2014, in which it found more than 100,000 tweets mentioning the word ‘rape’ were sent between December 26, 2013 and February 9, 2014, more than one in 10 of which were said to be “threatening in nature”.
That earlier study also found almost as many abusive tweets were being sent by women as by men.
We’ve reached out to Twitter with questions about the study and will update this post with any response. Update: In a statement provided to TechCrunch, Twitter’s EMEA head of trust & safety outreach, Kira O’Connor, said:
Hateful conduct has no place on the Twitter platform and is a violation of our terms of service. In addition to our policies and user controls, such as block, mute and our new multiple tweet reporting functionality, we work with civil society leaders and academic experts to understand the challenge that exists. Our ambition, in tandem with addressing abusive behaviour, is to reach a position where we can leverage Twitter’s incredible capabilities to empower positive voices, to challenge prejudice and to tackle the deeper root causes of intolerance in society.
Combating hate speech on online platforms is one of the priorities of the European Commission’s Digital Single Market strategy. The Commission has been working on a code of conduct with industry players that is due to be presented in the coming weeks.