Facebook says it took down 2 million terrorism posts in 2018

Posted May 16, 2018

Guy Rosen, Facebook's vice president of product management, announced that the social media company had decided to publish its first Community Standards Enforcement Report.

Facebook said it released the report to start a dialog about harmful content on the platform, and how it enforces community standards to combat it.

The report details Facebook's enforcement efforts from October 2017 to March 2018 and covers hate speech, fake accounts, spam, terrorist propaganda, graphic violence, and adult nudity and sexual activity.

Facebook today published its latest transparency report and, for the first time, included the number of items it removed in each category for violating its content standards. The company disabled 583 million fake accounts, many within minutes of registration. A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaida and the Islamic State, recruitment posts for other US-designated terrorist groups are still easy to find on the site.



In the first quarter, the company took down 837 million pieces of spam, almost 100 percent of which was found and flagged before anyone reported it.

Facebook removed 2.5 million pieces of hate speech in the three months to March, up 56 per cent from the previous quarter.

Facebook has been in hot water following allegations of data privacy violations by Cambridge Analytica, an election consultancy that improperly harvested information from millions of Facebook users for the Brexit campaign and Donald Trump's U.S. presidential bid.

Facebook is struggling to block hate speech posts, conceding that its detection technology "still doesn't work that well" and that flagged content still needs to be checked by human moderators.

As the world's largest social network, Facebook is home to more than 2 billion users, who share billions of posts, photos and videos every day.


Facebook's technology is good at removing nudity and graphic violence, but not at removing hate speech, which is much harder to judge automatically because so much depends on context.

"Of every 10,000 content views, an estimate of 22 to 27 contained graphic violence, compared to an estimate of 16 to 19 last quarter", Xinhua quoted the report as saying. Zuckerberg explained that Facebook is hiring thousands of people who can, over the course of millions of content decisions, train a better artificial intelligence system. But of the more recent total, only 38 percent was flagged by Facebook before users reported it (an improvement on the 23.6 percent in the prior three months).

"We have a lot of work still to do to prevent abuse".

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too", wrote Rosen. The company attributed the decline to the "variability of our detection technology's ability to find and flag" fakes.
