Jun 01, 2016, 16:53
Periscope is to put its users in charge of policing offensive comments with a crowd-sourced comment moderation system.
Comments flagged as inappropriate by one person will be instantly sent to others for review, and could result in a ban for the original poster.
The Twitter-owned video streaming app said "people in a broadcast are best suited to determine what's okay and what's not".
One expert welcomed the feature but said it could be misused in some cases.
Periscope said it wanted comment moderation on the platform to work in real time, to complement the live content in its app.
"Periscope is real-time, people should be able to report and moderate comments as they appear on the screen," it said in a blog post.
It said a handful of live-stream viewers would be chosen at random to vote whether reported comments were appropriate.
If the instant jury decides a comment is offensive, the writer will be temporarily banned from chatting in the live broadcast.
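The mechanism described above amounts to a small, randomly selected jury voting on a flagged comment, with a majority verdict triggering a short chat ban. The Python sketch below illustrates that flow only in outline; the jury size, ban duration, majority threshold and all names (Broadcast, review_flagged_comment, vote_fn) are illustrative assumptions, not details Periscope has published.

    import random
    import time
    from dataclasses import dataclass, field

    JURY_SIZE = 5          # assumed number of randomly chosen jurors
    TEMP_BAN_SECONDS = 60  # assumed length of the temporary chat ban

    @dataclass
    class Broadcast:
        viewers: list                                      # user ids currently watching
        banned_until: dict = field(default_factory=dict)   # user id -> ban expiry (epoch seconds)

    def review_flagged_comment(broadcast, author_id, comment, vote_fn):
        """Send a flagged comment to a small random jury of viewers.

        vote_fn(juror_id, comment) should return True if that juror
        judges the comment inappropriate.
        """
        eligible = [v for v in broadcast.viewers if v != author_id]
        jury = random.sample(eligible, min(JURY_SIZE, len(eligible)))
        inappropriate = sum(1 for juror in jury if vote_fn(juror, comment))

        # A simple majority of the jury is assumed to decide the outcome.
        if jury and inappropriate > len(jury) / 2:
            broadcast.banned_until[author_id] = time.time() + TEMP_BAN_SECONDS
            return "author temporarily banned from chat"
        return "comment allowed"

In this sketch the ban is enforced simply by recording an expiry time per user; how Periscope actually selects jurors, weighs votes and lifts bans has not been disclosed.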
"It's potentially a good idea," Mark Griffiths, professor of psychology at Nottingham Trent University, told the BBC.
"People learn from experience, so if somebody writes a comment and gets blocked from the live chat, perhaps they'll see what they wrote in a different light."
However, crowd-sourced moderation could also be used to silence opinions in a live-stream on a political issue or other sensitive topic.
"There are good intentions behind it, but when it comes to abuse online, things can be quite subjective," said Prof Griffiths.
"As with anything like this, these systems can be abused if people want to abuse them, particularly in political conversations.
"Obviously the main message is that there are always other options available to users, and this is one additional reporting tool to the ones already on Twitter and Periscope," he said.
Originally Published: Wed, 01 Jun 2016 11:02:28 GMT