Facebook and Twitter to provide Brexit disinformation reports soon

A UK parliamentary committee that’s investigating fake news has been told by Facebook and Twitter that they will provide information relating to Russian interference in the UK’s 2016 Brexit referendum vote in the coming weeks.

With election disinformation being publicly interrogated in the US, questions have increasingly been asked in the UK about whether foreign government agents also sought to use social channels to drive Brexit propaganda and sway voters.

Last month Damian Collins, the chair of the digital, culture, media and sport committee, wrote to Facebook and Twitter asking them to look into whether Russian-backed accounts had been used to try to influence voters in the June 2016 in/out EU referendum.

The Guardian reports that Collins has also asked senior representatives from the two companies to give evidence on the reach of fake news at the British embassy in Washington in February.

Earlier this month, the UK prime minister cranked up the political pressure by publicly accusing the Russian government of seeking to “weaponize information” by planting fake stories and photoshopped images to try to meddle in elections and sow discord in the West.

In a letter sent to Collins on Friday, Twitter confirmed it would be divulging its own findings soon, writing: “We are currently undertaking investigations into these questions and intend to share our findings in the coming weeks.”

Also responding to the committee last week, Facebook noted it had been contacted by the UK’s Electoral Commission about the issue of possible Russian interference in the referendum, as part of enquiries it’s making into whether the use of digital ads and bots on social media broke existing political campaigning rules.

“We are now considering how we can best respond to the Electoral Commission’s request for information and expect to respond to them by the second week of December. Given that your letter is about the same issue, we will share our response to the Electoral Commission with you,” Facebook writes.

We understand that Google has also been asked by the Electoral Commission to provide it with information pertaining to this probe.

Meanwhile, the UK’s data protection watchdog is conducting a parallel investigation into what it describes as “the data-protection risks arising from the use of data analytics, including for political purposes”.

Where Brexit is concerned, it’s not yet clear how much impact political disinformation amplified via social media had on the outcome of the vote. But there clearly was a disinformation campaign of sorts.

And one that prefigured what appears to have been an even larger effort by Kremlin agents to sway voters in the US presidential election, just a few months later.

After downplaying the impact of ‘fake news’ on the election for months, Facebook recently admitted that Russian-backed content could have reached as many as 126 million US users over the key political period.

Earlier this month it also finally admitted to finding some evidence of Brexit disinformation being spread via its platform. Though it claimed it had not found what it dubbed “significant coordination of ad buys or political misinformation targeting the Brexit vote”.

Meanwhile, research conducted by a group of academics using Twitter’s API to look at how political information diffused on the platform around the Brexit vote — including looking at how bots and human users interacted — has suggested that more than 156,000 Russian accounts mentioned #Brexit.

The researchers also found that Russian accounts posted almost 45,000 messages related to the EU referendum in the 48 hours around the vote (i.e. just before and just after).
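To give a sense of what that kind of API-based analysis involves, here is a minimal, illustrative sketch (not the researchers’ actual code) that counts #Brexit mentions per account within a 48-hour window around the vote. It assumes tweets have already been collected from Twitter’s streaming API into a newline-delimited JSON file; the filename tweets.jsonl is hypothetical, while the field names follow Twitter’s standard v1.1 tweet JSON.

```python
# Illustrative sketch: count #Brexit mentions per account in the 48 hours
# around the referendum, from tweets previously collected via Twitter's
# streaming API. "tweets.jsonl" is a hypothetical file of one tweet JSON
# object per line; field names follow Twitter's classic v1.1 tweet format.
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

# Polls closed on 23 June 2016 at 22:00 BST (21:00 UTC); take 24h either side.
VOTE_CLOSE_UTC = datetime(2016, 6, 23, 21, 0, tzinfo=timezone.utc)
WINDOW_START = VOTE_CLOSE_UTC - timedelta(hours=24)
WINDOW_END = VOTE_CLOSE_UTC + timedelta(hours=24)

def parse_created_at(value: str) -> datetime:
    # Twitter's classic timestamp format, e.g. "Thu Jun 23 20:59:59 +0000 2016"
    return datetime.strptime(value, "%a %b %d %H:%M:%S %z %Y")

mentions_per_account = Counter()

with open("tweets.jsonl", encoding="utf-8") as fh:
    for line in fh:
        tweet = json.loads(line)
        created = parse_created_at(tweet["created_at"])
        if not (WINDOW_START <= created <= WINDOW_END):
            continue
        hashtags = {h["text"].lower()
                    for h in tweet.get("entities", {}).get("hashtags", [])}
        if "brexit" in hashtags:
            mentions_per_account[tweet["user"]["screen_name"]] += 1

print(f"{len(mentions_per_account)} accounts used #Brexit in the window")
print(f"{sum(mentions_per_account.values())} #Brexit tweets in the window")
for account, count in mentions_per_account.most_common(10):
    print(account, count)
```

Of course, as Twitter points out below, any such study only sees what the API exposes, which is part of the company’s objection to how these findings get interpreted.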

Another academic study claimed to have identified 400 fake Twitter accounts being run by Kremlin trolls.

Twitter has claimed that external studies based on tweet data pulled via its API cannot represent the full picture of how information is diffused on its platform because the data stream does not take account of any quality filters it might also be applying, nor any controls individual users can use to shape the tweets they see.

It reiterates this point in its letter to Collins, writing:

… we have found studies of the impact of bots and automation on Twitter necessarily and systematically underrepresent our enforcement actions because these defensive actions are not visible via our APIs, and because they take place shortly after content is created and delivered via our streaming API.

Furthermore, researchers using an API often overlook the substantial in-product features that prioritize the most relevant content. Based on user interests and choices, we limit the visibility of low-quality content using tools such as Quality Filter and Safe Search — both of which are on by default for all of Twitter’s users and active for more than 97% of users.

It also notes that researchers have not always correctly identified bots — flagging media reports which it claims have “recently highlighted how users named as bots in research were real people, reinforcing the risks of limited data being used to attribute activity, particularly in the absence of peer review”.

Although there have also been media reports of the reverse phenomenon: Twitter users who passed themselves off as ‘real people’ (frequently Americans), and accrued lots of retweets, yet who have since been unmasked as Kremlin-controlled disinformation accounts — such as @SouthLoneStar.

Twitter’s letter ends by seeking to play down the political influence of botnets — quoting the conclusion of a City University report that states “we have not found evidence supporting the notion that bots can substantively alter campaign communication”.

But again, that study would presumably have been based on the partial view of information diffusion on its platform that Twitter has otherwise complained does not represent the full picture (i.e. in order to downplay other studies that have suggested bots were successfully spreading Brexit-related political disinformation).

So really, it can’t have it both ways. (See also: Facebook selling ads on its platform while trying to simultaneously claim the notion that fake news can influence voters is “crazy”.)

In its letter to Collins, Twitter does also say it’s “engaged in dialogue with academics and think tanks around the world, including those in the UK, to discuss potential collaboration and to explore where our own efforts can be better shared without jeopardizing their effectiveness or user privacy”.

And at least now we don’t have too much longer to wait for its official assessment of the role Russian agents using its platform played in Brexit.

Granted, if Twitter provided full and free access to researchers so that the opinion-influencing impact of its platform could be more robustly studied, the company probably still wouldn’t like all the conclusions being drawn. But nor would it so easily be able to downplay them.

Featured Image: Erik Tham/Getty Images
