Dec 04, 2020, 01:23 am
YouTube will begin warning users before they post comments that may be offensive to other people, the company announced Thursday.
The new feature is part of the video-sharing platform’s efforts to address widespread racist and homophobic harassment targeted at creators by commenters and other accounts.
YouTube will also begin proactively asking users to provide demographic information in an effort to find patterns of hate speech “that may affect some communities more than others.”
The company last December beefed up its policy on harassment, saying it would be taking a stricter stance on "veiled or implied threats” moving forward.
The company touts that since the beginning of 2019 it has increased the number of daily hate speech comment removals by 46-fold.
However, hateful content remains rampant on the platform.
The strategy of warning users that their comments may be offensive has been tested by other platforms.
In July 2019, Instagram began showing users pop-ups asking whether they are sure they want to post comments that might violate the app’s guidelines. It expanded these “nudge warnings” this October.
Instagram has said that early trials of the pop-up yielded positive results.
A study released in September by OpenWeb and Google’s AI conversation platform attempted to quantify the effects of comment feedback by analyzing 400,000 comments on news websites.
The study found that for roughly a third of commenters, seeing a warning caused them to revise their comments. A little over half of those who edited their comments changed them in a way that no longer violated community standards, while roughly a quarter simply reworked their comments to avoid triggering the automated system.
Thirty-six percent of users exposed to the warnings posted their comments anyway.
Those results seem to line up with previous research by Coral, a commenting platform owned by Vox Media that is used by many media sites.
That 2018 study found that over a six-week period, 36 percent of those who received feedback on one of McClatchy’s websites edited their comment to reduce toxicity.
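
For anyone curious how this kind of pre-post nudge works in practice, here is a minimal sketch of the flow the article describes: score a draft comment, warn the commenter if it looks toxic, and let them post or revise. Everything below is hypothetical; the toxicity_score() stub, the threshold, and the revise callback are illustrative stand-ins, not YouTube's or Instagram's actual implementation.

[code]
# Minimal sketch of a pre-post "nudge warning" flow.
# All names here are hypothetical placeholders, not platform code.

WARN_THRESHOLD = 0.8  # assumed cutoff above which a warning is shown


def toxicity_score(comment: str) -> float:
    """Hypothetical classifier: returns a 0.0-1.0 toxicity estimate.

    A real system would call a trained model; this stub just flags a few
    placeholder terms so the example runs end to end.
    """
    flagged = {"idiot", "stupid", "hate"}
    words = comment.lower().split()
    return 1.0 if any(word in flagged for word in words) else 0.0


def submit_comment(comment: str, ask_user_to_revise) -> str:
    """Warn once before posting; accept whatever the user submits next."""
    if toxicity_score(comment) >= WARN_THRESHOLD:
        # Show the warning and give the commenter a chance to edit.
        revised = ask_user_to_revise(comment)
        return revised  # posted either way; the warning is only a nudge
    return comment


if __name__ == "__main__":
    # Example: the "user" softens their comment after seeing the warning.
    posted = submit_comment("you are an idiot", lambda draft: "I strongly disagree")
    print(posted)  # -> "I strongly disagree"
[/code]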
---------------------------------------------------------------------------------------------------------------------------------------------------------------
"...edited their comment to reduce toxicity."
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
https://thehill.com/policy/technology/52...-offensive