In 1999 the eponymous owner of the popular proto-blog Stile Project wrote a post called “The Mercury Effect.”
In it Stile described receiving a video from some fans who had, if I remember correctly, tortured a stray cat to death in his honor; they appeared to have been inspired by the other shocking material he’d posted and solicited. Stile briefly discussed the despondency into which this event had plunged him and his failing hopes for the human race. The title referred to the madness visited upon hatters by an invisible force, something to which he perhaps sensed a modern analog.
He concluded: “Something wonderful is going to happen at midnight.” That night, he turned on his webcam’s live broadcast function, stepped onto a chair, and hanged himself.
Or appeared to. A few days and a great deal of speculation later, it was revealed that the whole thing was a hoax, something Stile was fond of perpetrating — though this one was especially cruel. All this material has passed from the internet’s memory (even Archive.org’s), but I’ve never been able to forget it.
This horrible little episode seems to me a spiritual precursor to this weekend’s all-too-real socially promoted murder, and the issues that have resurfaced in its wake. But for all the talk about content moderation, machine learning, and flag-monitoring algorithms, the problem isn’t the platform, and it isn’t one that can be solved by the platform. That’s because the problem is people.
I wrote a while back in Hate that not only is it naive to think the tools we create won’t be used for evil, but it’s irresponsible to pretend so. This is just another example of that. Connect depressed people with a support network here, and you connect white supremacists with gun dealers there. You make a forum for supporting recent immigrants here, and you make one for choosing women to harass there. Let a zoo share a baby giraffe with millions, let someone else stream the murder of a stranger.
This is a direct, unavoidable consequence of the tools; they’re not being “abused” or “misappropriated.” Routers and switches don’t care if they relay coding tutorials or child porn, just like a car doesn’t care whether you drive it into a garage or a crowd.
By empowering people to broadcast themselves, you empower the meek and the oppressed as much as the dangerous and the hateful.
What did you expect? Those people are out there in their millions. They want you to experience the extent of their hate just as much as a lonely kid wants to get support from her peers. As with the Morlocks in The Time Machine, every once in a while those of us living in blissful ignorance are reminded of their presence by some horrific act. That sort of thing was always happening, but now you’re aware of it. Thanks, internet.
Pick your fail state
You won’t like the solutions.
The first one is: disconnect. Don’t use the tools of the information age to connect with the world at large. I think we can all agree that it’s a little late to try putting that particular genie back in the bottle. Even if millions of people submitted themselves to a modern asceticism and denied themselves access to social media and other communication tools, it would accomplish nothing. If anything, it would merely move the needle of the remaining online population towards the side of extreme sharing. So we can forget about disconnecting.
The second one is: submit to extremely invasive content monitoring. Live TV has faced a problem that is at least superficially similar, and the networks’ solution — delayed live broadcasting and someone with their finger on the “cut to commercial” button — works, after a fashion. But the volume of material put on Facebook, Twitter, Instagram and so on is such that this approach is quickly rendered absurd. Even hundreds of thousands of moderators assisted by the latest tools struggle to keep up with the fraud, gore, and porn that would otherwise engulf the web.
Could machine learning algorithms eventually learn the difference between a sleeping person and a dead one? A scene from an action movie and a real murder? There is great promise here, but the flip side is that everything you create will be analyzed frame by frame, every action categorized and recorded with a granularity that may creep out even the most permissive and laissez-faire of us. And even if the machines had their way with our content, it would still require an army of humans to verify each decision. Platforms have already learned that lesson.
The only feasible way to vet content quickly and accurately is through the community, but the content must be vetted precisely so that it can be provided to that community in the first place. It’s a Gordian Knot, and us without a sword.
The third “solution” is to admit there isn’t one. Admit the problem as it stands can’t be resolved, that solutions at best merely hide or delay it, that the fundamental nature of the tools and platforms we’ve created enables both miracles and enormities. We can appreciate the former while doing our best to combat the causes of the latter. If people didn’t go around killing each other all the time, we wouldn’t be forced into uncomfortable acknowledgement of the fact that all is not well in the world. Wouldn’t that be nice?
So, the elevation of humanity’s ethical acumen. More of a long-term goal, I’d say.
The best policy
The thing is, the kind of philosophically inclined defeatism I just endorsed isn’t really a crowd pleaser. When you’ve got a billion angry users and a board breathing down your neck, you’ve got to take action — even when there’s no action to take. But in this case can you, or more specifically Mark Zuckerberg, even say “we’re working on it”?
We know as well as he does that Facebook can’t prevent this stuff. The risk is baked into the platform. Real-time sharing is fundamental to the company’s vision of the future of communication. It’s too late to go back on that. The best they can hope to do is react faster.
Can he lie, or prevaricate, about their hopes to solve this, and get away with it? Not for long: there isn’t really a way forward, and in a few weeks or months, when something like this happens again, he’ll be called to account. The reality of the problem’s insolubility will catch up to any promises he makes. So why make them? People aren’t going to leave Facebook because it has no way of censoring the real world.
In this case, Zuckerberg’s position is stable enough that he may well tell the (relatively) unvarnished truth, albeit with the frame of reference changed a little. The risk of unwillingly hosting atrocities is inherent to Facebook’s mission (he will say) to connect all the people of the globe. There is no way to prevent it except by infringing on the privileges, perhaps even the rights, of those people — and Facebook will always decline to do that. What’s more, he may add, sunlight is after all the best disinfectant. We have to know of these people, these problems, before we can address them.
Shall we remain inside our safe little bubble, hearing only that which pleases us and seeing not that which frightens or confuses us? Shall we be forever free of cognitive dissonance, isolated from disharmony long enough that we forget we are surrounded by it, a few of us merely lucky enough to have the choice of ignoring it? Well, maybe Zuck won’t say that exactly, but the sentiment, I suspect, may come through.
We’re only human, and the internet reflects that. What’s to fix?