Facebook reports a drop in hate speech

Facebook is responding to whistleblower Frances Haugen’s testimony by trying to reshape the narrative around hate speech on its platform.

The company’s Vice President of Integrity, Guy Rosen, posted a defense of Facebook’s measures to combat hate across the social network, arguing that the declining prevalence of hate speech, meaning how often users actually see it, matters more than the mere presence of such content.

Rosen said the prevalence of hate speech across Facebook has fallen by nearly 50 percent over the past three quarters, to 0.05 percent of content viewed, or about five views out of every 10,000.
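The prevalence figure Rosen cites is a simple ratio: hate-speech views divided by all content views. A minimal sketch of that calculation (the function name and inputs here are illustrative, not from any Meta API):

```python
def prevalence(hate_views: int, total_views: int) -> float:
    """Return hate-speech views as a percentage of all content views."""
    return 100.0 * hate_views / total_views

# Five hate-speech views out of every 10,000 content views is the
# 0.05 percent figure cited in Rosen's post.
print(prevalence(5, 10_000))  # 0.05
```

The point of the metric is that it weights content by exposure: a violating post seen by almost no one barely moves the number, which is exactly the framing Rosen favors over raw counts of removed posts.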

He added: “The narrative that the technology we use to combat hate speech is insufficient and that we are deliberately misrepresenting our progress was wrong.”

“We don’t want to see hate on our platform, and neither do users or advertisers,” Rosen wrote, adding that the company is transparent about its work to remove such content. What the leaked documents make clear, he argued, is that Facebook’s integrity work is a multi-year journey: while the company will never be perfect, its teams constantly work to develop its systems, identify problems, and build solutions.

The executive claimed it was a mistake to focus on content removal as the only measure. Rosen said there were other ways to counter hate, and that the company had to be confident before removing any material.

This means taking care not to accidentally remove legitimate content, while limiting the reach of people, groups, and pages that are likely to violate its policies.

The company sometimes runs into problems when content is mistakenly reported as hate speech, and erroneous removals can cause further harm. Conversely, a hateful post has only a limited effect if few people actually view it.

In her testimony, Haugen emphasized that the company catches only a very small minority of offensive material; if true, this remains a problem even if only a small portion of users view that material.

Read also: Facebook makes AI see the world through your eyes

Facebook hopes to change the story after testimony against it

The post appears to be a response to a Wall Street Journal article reporting that the Facebook employees tasked with keeping offensive content off the platform did not believe the company could reliably screen for it.

The Wall Street Journal report cites internal documents showing that two years ago the company reduced the time human reviewers spent on hate speech complaints, and made other changes that reduced the overall number of complaints.

This, in turn, helped create the appearance that the company’s AI was more successful in enforcing the rules than it actually was.

Rosen’s response also does not address Haugen’s claim that the company resisted implementing safer algorithms that would reduce hateful interactions.

The company may well be making strides in reducing hate. But that is not the view of Haugen, who says the social media company is not doing enough.

Read also: Facebook feed stripped of legal protection
