It’s going to be harder for trolls to disrupt Periscope broadcasts. The Twitter-owned live-streaming app has offered chat moderation capabilities for years, but until now it has relied on group moderation. That means when users flagged a comment as abuse, spam or harassment, Periscope would randomly select a few other viewers to review it and decide whether the flag was accurate. If those viewers agreed, the violator would be banned. That worked well in some cases, but it still put control in the hands of the crowd, not the live streamer. Now, Periscope is changing that.
Instead of relying solely on group moderation, the company says broadcasters will now be able to assign chat moderators before they start streaming.
These moderators can then watch the chat during the live broadcast and actively mute commenters in the audience who are disruptive.
After being muted, the person will not be allowed to chat for the remainder of the broadcast. This muting activity will be visible to anyone joining the broadcast from either Periscope or Twitter, but assigning chat moderators can only be done from Periscope, the company says.
When the live stream wraps, the broadcaster can then view a list of all the muted accounts and can choose to block those users from joining in future broadcasts.
The addition, which arrived alongside new replay editing tools, is another step toward improving the health of conversations on Periscope, the company claims. It follows another change announced this past summer, which focused on stricter enforcement of its rules around abuse and harassment.
Before, trolls whose comments were flagged during a broadcast were only temporarily blocked from chatting. They wouldn’t be able to comment on that live broadcast, but they could still join others in the future and continue to disrupt, threaten or abuse the video creator or the community.
The change that rolled out this summer meant that people who repeatedly violated the guidelines during broadcasts would have their Periscope accounts reviewed and suspended.
Online harassment is not a new problem, to be sure, but the major social platforms have been struggling to get a handle on the issue.
In Twitter’s case in particular, the company has been called out for being too tolerant of online harassment and hate speech under the guise of protecting free speech. In recent months, however, Twitter has been working to better handle abuse complaints, including through the acquisition of anti-abuse technology provider Smyte, which is helping to automate some of these processes, as well as through the rollout of more stringent policies and anti-abuse features. Periscope hasn’t received as much attention, but it is focusing on reducing the abuse that occurs during real-time conversations on live broadcasts.
More info on how the new chat moderation feature works is here.