A new flurry of tweets from President Trump is pushing the limits of social platform policies designed explicitly to keep users safe from the spread of the novel coronavirus, both online and off.
In a series of rapid-fire messages on Friday morning, Trump issued a call to “LIBERATE” Virginia, Minnesota and Michigan, all states led by Democratic governors. Trump’s tweets promoted protests in those states against ongoing public safety measures meant to keep residents safe from the virus, many of them designed by his own administration. Trump also shared the messages on his Facebook page.
In the case of Minnesota, the tweet was not a generic message to his supporters in the state — it referenced a Friday protest event by its name, “Liberate Minnesota.”
In Minnesota, the in-person event gathered a group of Trump supporters outside the St. Paul home of Governor Tim Walz to protest the state’s ongoing lockdown. According to a reporter on the scene Friday, the protest had attracted attendees in the “low hundreds” so far, and few were practicing social distancing or wearing masks. The event was organized on Facebook.
“President Trump has been very clear that we must get America back to work very quickly or the ‘cure’ to this terrible disease may be the worse option!” the event’s Facebook description states. In a later disclaimer, event organizers encourage attendees to exercise “personal responsibility” at the protest, stating that they “are not responsible for your current health situation or future health.”
The president’s tweets contradict his own administration’s guidance on reopening state economies, detailed yesterday in coordination with health experts. Earlier this week, Trump claimed that a president has “total authority” to reopen the national economy, a sentiment his tweets Friday appeared to undermine.
Trump’s calls to action in support of state-based protests also appear to conflict with both Twitter’s and Facebook’s new pandemic-specific rules, which in both cases explicitly forbid any COVID-19 content that could result in the real-world spread of the virus.
Over the last month, Facebook and Twitter both rolled out relatively aggressive new policies designed to protect users from content contradicting the guidance of health experts, particularly anything that could result in real-world harm.
Update: According to a Twitter spokesperson, the president’s tweets do not currently violate Twitter’s rules. Twitter does not consider the tweets, as worded, a “clear call to action” that could pose a health risk, nor did it determine that the tweets were shared with harmful intent.
As of publication, Facebook had not answered questions about the protests organized on its platform.
In late March, Twitter updated its safety policy to prohibit any tweets that “could place people at a higher risk of transmitting COVID-19.” The policy banned tweets claiming that social distancing doesn’t work, as well as anything with a “call to action” promoting risky behavior, like encouraging people to go out to a local bar.
On April 1, Twitter again broadened its definition for the kind of harmful COVID-19 content it forbids, stating that it would “continue to prioritize removing content when it has a clear call to action that could directly pose a risk to people’s health or well-being.”
Facebook similarly expanded its platform rules to match the existential health threat posed by the coronavirus. In guidance on its policies for the pandemic, Facebook says that it “remove[s] COVID-19 related misinformation that could contribute to imminent physical harm.” As an example, the company noted that in March it began removing “claims that physical distancing doesn’t help prevent the spread of the coronavirus.”
Social media companies signaled early in the U.S. spread of the coronavirus that they would take health misinformation — and the safety of their users — more seriously than ever. In some instances, that tough talk appears to have translated into real changes: Facebook, which has generally been more proactive about health misinformation than about other topics, moved to promote health expertise and limit the spread of misleading coronavirus content on the platform, even announcing that it would notify anyone who had interacted with COVID-19 misinformation with a special message in their news feed.